00:00:00.001 Started by upstream project "autotest-per-patch" build number 132299 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.015 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.016 The recommended git tool is: git 00:00:00.016 using credential 00000000-0000-0000-0000-000000000002 00:00:00.019 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.037 Fetching changes from the remote Git repository 00:00:00.039 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.059 Using shallow fetch with depth 1 00:00:00.059 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.059 > git --version # timeout=10 00:00:00.094 > git --version # 'git version 2.39.2' 00:00:00.094 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.145 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.145 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.393 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.405 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.417 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:02.417 > git config core.sparsecheckout # timeout=10 00:00:02.429 > git read-tree -mu HEAD # timeout=10 00:00:02.445 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:02.464 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:02.464 > git rev-list 
--no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:02.581 [Pipeline] Start of Pipeline 00:00:02.597 [Pipeline] library 00:00:02.599 Loading library shm_lib@master 00:00:02.599 Library shm_lib@master is cached. Copying from home. 00:00:02.617 [Pipeline] node 00:00:02.627 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_2 00:00:02.630 [Pipeline] { 00:00:02.644 [Pipeline] catchError 00:00:02.647 [Pipeline] { 00:00:02.660 [Pipeline] wrap 00:00:02.668 [Pipeline] { 00:00:02.676 [Pipeline] stage 00:00:02.678 [Pipeline] { (Prologue) 00:00:02.696 [Pipeline] echo 00:00:02.697 Node: VM-host-WFP7 00:00:02.703 [Pipeline] cleanWs 00:00:02.712 [WS-CLEANUP] Deleting project workspace... 00:00:02.712 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.717 [WS-CLEANUP] done 00:00:02.902 [Pipeline] setCustomBuildProperty 00:00:02.979 [Pipeline] httpRequest 00:00:03.383 [Pipeline] echo 00:00:03.384 Sorcerer 10.211.164.101 is alive 00:00:03.392 [Pipeline] retry 00:00:03.393 [Pipeline] { 00:00:03.402 [Pipeline] httpRequest 00:00:03.406 HttpMethod: GET 00:00:03.406 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:03.407 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:03.408 Response Code: HTTP/1.1 200 OK 00:00:03.408 Success: Status code 200 is in the accepted range: 200,404 00:00:03.409 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:03.553 [Pipeline] } 00:00:03.570 [Pipeline] // retry 00:00:03.577 [Pipeline] sh 00:00:03.860 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:03.874 [Pipeline] httpRequest 00:00:04.766 [Pipeline] echo 00:00:04.767 Sorcerer 10.211.164.101 is alive 00:00:04.773 [Pipeline] retry 00:00:04.774 [Pipeline] { 00:00:04.783 [Pipeline] httpRequest 00:00:04.786 HttpMethod: GET 00:00:04.786 URL: 
http://10.211.164.101/packages/spdk_1a15c71369c252ac1e9708e5f9f66717e728df6f.tar.gz 00:00:04.787 Sending request to url: http://10.211.164.101/packages/spdk_1a15c71369c252ac1e9708e5f9f66717e728df6f.tar.gz 00:00:04.788 Response Code: HTTP/1.1 200 OK 00:00:04.789 Success: Status code 200 is in the accepted range: 200,404 00:00:04.789 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_1a15c71369c252ac1e9708e5f9f66717e728df6f.tar.gz 00:00:20.114 [Pipeline] } 00:00:20.132 [Pipeline] // retry 00:00:20.140 [Pipeline] sh 00:00:20.425 + tar --no-same-owner -xf spdk_1a15c71369c252ac1e9708e5f9f66717e728df6f.tar.gz 00:00:23.010 [Pipeline] sh 00:00:23.303 + git -C spdk log --oneline -n5 00:00:23.303 1a15c7136 lib/nvme: destruct controllers that failed init asynchronously 00:00:23.303 d1c46ed8e lib/rdma_provider: Add API to check if accel seq supported 00:00:23.303 a59d7e018 lib/mlx5: Add API to check if UMR registration supported 00:00:23.303 f6925f5e4 accel/mlx5: Merge crypto+copy to reg UMR 00:00:23.303 008a6371b accel/mlx5: Initial implementation of mlx5 platform driver 00:00:23.326 [Pipeline] writeFile 00:00:23.340 [Pipeline] sh 00:00:23.633 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:23.649 [Pipeline] sh 00:00:23.961 + cat autorun-spdk.conf 00:00:23.961 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:23.961 SPDK_RUN_ASAN=1 00:00:23.961 SPDK_RUN_UBSAN=1 00:00:23.961 SPDK_TEST_RAID=1 00:00:23.961 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:23.971 RUN_NIGHTLY=0 00:00:23.973 [Pipeline] } 00:00:23.987 [Pipeline] // stage 00:00:24.003 [Pipeline] stage 00:00:24.006 [Pipeline] { (Run VM) 00:00:24.020 [Pipeline] sh 00:00:24.311 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:24.311 + echo 'Start stage prepare_nvme.sh' 00:00:24.312 Start stage prepare_nvme.sh 00:00:24.312 + [[ -n 1 ]] 00:00:24.312 + disk_prefix=ex1 00:00:24.312 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:00:24.312 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 00:00:24.312 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:00:24.312 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:24.312 ++ SPDK_RUN_ASAN=1 00:00:24.312 ++ SPDK_RUN_UBSAN=1 00:00:24.312 ++ SPDK_TEST_RAID=1 00:00:24.312 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:24.312 ++ RUN_NIGHTLY=0 00:00:24.312 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:00:24.312 + nvme_files=() 00:00:24.312 + declare -A nvme_files 00:00:24.312 + backend_dir=/var/lib/libvirt/images/backends 00:00:24.312 + nvme_files['nvme.img']=5G 00:00:24.312 + nvme_files['nvme-cmb.img']=5G 00:00:24.312 + nvme_files['nvme-multi0.img']=4G 00:00:24.312 + nvme_files['nvme-multi1.img']=4G 00:00:24.312 + nvme_files['nvme-multi2.img']=4G 00:00:24.312 + nvme_files['nvme-openstack.img']=8G 00:00:24.312 + nvme_files['nvme-zns.img']=5G 00:00:24.312 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:24.312 + (( SPDK_TEST_FTL == 1 )) 00:00:24.312 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:24.312 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:24.312 + for nvme in "${!nvme_files[@]}" 00:00:24.312 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:24.312 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:24.312 + for nvme in "${!nvme_files[@]}" 00:00:24.312 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:24.312 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:24.312 + for nvme in "${!nvme_files[@]}" 00:00:24.312 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:24.312 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:24.312 + for nvme in "${!nvme_files[@]}" 00:00:24.312 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:24.312 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:24.312 + for nvme in "${!nvme_files[@]}" 00:00:24.312 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:24.312 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:24.312 + for nvme in "${!nvme_files[@]}" 00:00:24.312 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:24.312 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:24.312 + for nvme in "${!nvme_files[@]}" 00:00:24.312 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:25.263 
Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:25.263 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:25.263 + echo 'End stage prepare_nvme.sh' 00:00:25.263 End stage prepare_nvme.sh 00:00:25.275 [Pipeline] sh 00:00:25.558 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:25.558 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:00:25.558 00:00:25.558 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:00:25.558 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:00:25.558 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:00:25.558 HELP=0 00:00:25.558 DRY_RUN=0 00:00:25.558 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:00:25.558 NVME_DISKS_TYPE=nvme,nvme, 00:00:25.558 NVME_AUTO_CREATE=0 00:00:25.558 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:00:25.558 NVME_CMB=,, 00:00:25.558 NVME_PMR=,, 00:00:25.558 NVME_ZNS=,, 00:00:25.558 NVME_MS=,, 00:00:25.558 NVME_FDP=,, 00:00:25.558 SPDK_VAGRANT_DISTRO=fedora39 00:00:25.558 SPDK_VAGRANT_VMCPU=10 00:00:25.558 SPDK_VAGRANT_VMRAM=12288 00:00:25.558 SPDK_VAGRANT_PROVIDER=libvirt 00:00:25.558 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:25.558 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:25.558 SPDK_OPENSTACK_NETWORK=0 00:00:25.558 VAGRANT_PACKAGE_BOX=0 00:00:25.558 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 
00:00:25.558 FORCE_DISTRO=true 00:00:25.558 VAGRANT_BOX_VERSION= 00:00:25.558 EXTRA_VAGRANTFILES= 00:00:25.558 NIC_MODEL=virtio 00:00:25.558 00:00:25.558 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:00:25.558 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:00:28.107 Bringing machine 'default' up with 'libvirt' provider... 00:00:28.107 ==> default: Creating image (snapshot of base box volume). 00:00:28.366 ==> default: Creating domain with the following settings... 00:00:28.366 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731667534_1cf71ec9390129de6055 00:00:28.366 ==> default: -- Domain type: kvm 00:00:28.366 ==> default: -- Cpus: 10 00:00:28.366 ==> default: -- Feature: acpi 00:00:28.366 ==> default: -- Feature: apic 00:00:28.366 ==> default: -- Feature: pae 00:00:28.366 ==> default: -- Memory: 12288M 00:00:28.366 ==> default: -- Memory Backing: hugepages: 00:00:28.366 ==> default: -- Management MAC: 00:00:28.366 ==> default: -- Loader: 00:00:28.366 ==> default: -- Nvram: 00:00:28.366 ==> default: -- Base box: spdk/fedora39 00:00:28.366 ==> default: -- Storage pool: default 00:00:28.366 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731667534_1cf71ec9390129de6055.img (20G) 00:00:28.366 ==> default: -- Volume Cache: default 00:00:28.366 ==> default: -- Kernel: 00:00:28.366 ==> default: -- Initrd: 00:00:28.366 ==> default: -- Graphics Type: vnc 00:00:28.366 ==> default: -- Graphics Port: -1 00:00:28.366 ==> default: -- Graphics IP: 127.0.0.1 00:00:28.366 ==> default: -- Graphics Password: Not defined 00:00:28.366 ==> default: -- Video Type: cirrus 00:00:28.366 ==> default: -- Video VRAM: 9216 00:00:28.366 ==> default: -- Sound Type: 00:00:28.367 ==> default: -- Keymap: en-us 00:00:28.367 ==> default: -- TPM Path: 00:00:28.367 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:28.367 ==> default: -- Command line 
args: 00:00:28.367 ==> default: -> value=-device, 00:00:28.367 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:28.367 ==> default: -> value=-drive, 00:00:28.367 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:00:28.367 ==> default: -> value=-device, 00:00:28.367 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:28.367 ==> default: -> value=-device, 00:00:28.367 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:28.367 ==> default: -> value=-drive, 00:00:28.367 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:28.367 ==> default: -> value=-device, 00:00:28.367 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:28.367 ==> default: -> value=-drive, 00:00:28.367 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:28.367 ==> default: -> value=-device, 00:00:28.367 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:28.367 ==> default: -> value=-drive, 00:00:28.367 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:28.367 ==> default: -> value=-device, 00:00:28.367 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:28.367 ==> default: Creating shared folders metadata... 00:00:28.367 ==> default: Starting domain. 00:00:29.751 ==> default: Waiting for domain to get an IP address... 00:00:47.881 ==> default: Waiting for SSH to become available... 00:00:48.818 ==> default: Configuring and enabling network interfaces... 
00:00:55.392 default: SSH address: 192.168.121.181:22 00:00:55.392 default: SSH username: vagrant 00:00:55.392 default: SSH auth method: private key 00:00:58.724 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:06.858 ==> default: Mounting SSHFS shared folder... 00:01:08.763 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:08.763 ==> default: Checking Mount.. 00:01:09.720 ==> default: Folder Successfully Mounted! 00:01:09.720 ==> default: Running provisioner: file... 00:01:10.657 default: ~/.gitconfig => .gitconfig 00:01:11.226 00:01:11.226 SUCCESS! 00:01:11.226 00:01:11.226 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:11.226 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:11.226 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 
00:01:11.226 00:01:11.236 [Pipeline] } 00:01:11.252 [Pipeline] // stage 00:01:11.261 [Pipeline] dir 00:01:11.262 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:01:11.263 [Pipeline] { 00:01:11.276 [Pipeline] catchError 00:01:11.278 [Pipeline] { 00:01:11.293 [Pipeline] sh 00:01:11.576 + vagrant ssh-config --host vagrant 00:01:11.576 + sed -ne /^Host/,$p 00:01:11.576 + tee ssh_conf 00:01:14.869 Host vagrant 00:01:14.869 HostName 192.168.121.181 00:01:14.869 User vagrant 00:01:14.869 Port 22 00:01:14.869 UserKnownHostsFile /dev/null 00:01:14.869 StrictHostKeyChecking no 00:01:14.869 PasswordAuthentication no 00:01:14.869 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:14.869 IdentitiesOnly yes 00:01:14.869 LogLevel FATAL 00:01:14.869 ForwardAgent yes 00:01:14.869 ForwardX11 yes 00:01:14.869 00:01:14.883 [Pipeline] withEnv 00:01:14.886 [Pipeline] { 00:01:14.901 [Pipeline] sh 00:01:15.184 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:15.184 source /etc/os-release 00:01:15.184 [[ -e /image.version ]] && img=$(< /image.version) 00:01:15.184 # Minimal, systemd-like check. 00:01:15.184 if [[ -e /.dockerenv ]]; then 00:01:15.184 # Clear garbage from the node's name: 00:01:15.184 # agt-er_autotest_547-896 -> autotest_547-896 00:01:15.184 # $HOSTNAME is the actual container id 00:01:15.184 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:15.184 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:15.184 # We can assume this is a mount from a host where container is running, 00:01:15.184 # so fetch its hostname to easily identify the target swarm worker. 
00:01:15.184 container="$(< /etc/hostname) ($agent)" 00:01:15.184 else 00:01:15.184 # Fallback 00:01:15.184 container=$agent 00:01:15.184 fi 00:01:15.184 fi 00:01:15.184 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:15.184 00:01:15.456 [Pipeline] } 00:01:15.473 [Pipeline] // withEnv 00:01:15.481 [Pipeline] setCustomBuildProperty 00:01:15.496 [Pipeline] stage 00:01:15.498 [Pipeline] { (Tests) 00:01:15.514 [Pipeline] sh 00:01:15.795 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:16.069 [Pipeline] sh 00:01:16.387 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:16.663 [Pipeline] timeout 00:01:16.663 Timeout set to expire in 1 hr 30 min 00:01:16.665 [Pipeline] { 00:01:16.681 [Pipeline] sh 00:01:16.963 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:17.532 HEAD is now at 1a15c7136 lib/nvme: destruct controllers that failed init asynchronously 00:01:17.547 [Pipeline] sh 00:01:17.832 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:18.105 [Pipeline] sh 00:01:18.386 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:18.663 [Pipeline] sh 00:01:18.947 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:19.206 ++ readlink -f spdk_repo 00:01:19.206 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:19.206 + [[ -n /home/vagrant/spdk_repo ]] 00:01:19.206 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:19.206 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:19.206 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:19.206 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:19.206 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:19.206 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:19.206 + cd /home/vagrant/spdk_repo 00:01:19.206 + source /etc/os-release 00:01:19.206 ++ NAME='Fedora Linux' 00:01:19.206 ++ VERSION='39 (Cloud Edition)' 00:01:19.206 ++ ID=fedora 00:01:19.206 ++ VERSION_ID=39 00:01:19.206 ++ VERSION_CODENAME= 00:01:19.206 ++ PLATFORM_ID=platform:f39 00:01:19.206 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:19.206 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:19.206 ++ LOGO=fedora-logo-icon 00:01:19.206 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:19.206 ++ HOME_URL=https://fedoraproject.org/ 00:01:19.206 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:19.206 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:19.206 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:19.206 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:19.206 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:19.206 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:19.206 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:19.206 ++ SUPPORT_END=2024-11-12 00:01:19.206 ++ VARIANT='Cloud Edition' 00:01:19.206 ++ VARIANT_ID=cloud 00:01:19.206 + uname -a 00:01:19.206 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:19.206 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:19.774 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:19.774 Hugepages 00:01:19.774 node hugesize free / total 00:01:19.774 node0 1048576kB 0 / 0 00:01:19.774 node0 2048kB 0 / 0 00:01:19.774 00:01:19.774 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:19.774 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:19.774 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:19.774 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:19.774 + rm -f /tmp/spdk-ld-path 00:01:19.774 + source autorun-spdk.conf 00:01:19.774 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.774 ++ SPDK_RUN_ASAN=1 00:01:19.774 ++ SPDK_RUN_UBSAN=1 00:01:19.774 ++ SPDK_TEST_RAID=1 00:01:19.774 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.774 ++ RUN_NIGHTLY=0 00:01:19.774 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:19.774 + [[ -n '' ]] 00:01:19.774 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:19.774 + for M in /var/spdk/build-*-manifest.txt 00:01:19.774 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:19.774 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:19.774 + for M in /var/spdk/build-*-manifest.txt 00:01:19.774 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:19.774 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:19.774 + for M in /var/spdk/build-*-manifest.txt 00:01:19.774 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:19.774 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:19.774 ++ uname 00:01:19.774 + [[ Linux == \L\i\n\u\x ]] 00:01:19.774 + sudo dmesg -T 00:01:20.035 + sudo dmesg --clear 00:01:20.035 + dmesg_pid=5419 00:01:20.035 + sudo dmesg -Tw 00:01:20.035 + [[ Fedora Linux == FreeBSD ]] 00:01:20.035 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:20.035 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:20.035 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:20.035 + [[ -x /usr/src/fio-static/fio ]] 00:01:20.035 + export FIO_BIN=/usr/src/fio-static/fio 00:01:20.035 + FIO_BIN=/usr/src/fio-static/fio 00:01:20.035 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:20.035 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:20.035 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:20.035 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:20.035 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:20.035 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:20.035 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:20.035 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:20.035 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:20.035 10:46:26 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:20.035 10:46:26 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:20.035 10:46:26 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.035 10:46:26 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:20.035 10:46:26 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:20.035 10:46:26 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:20.035 10:46:26 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.035 10:46:26 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:20.035 10:46:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:20.035 10:46:26 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:20.035 10:46:26 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:20.035 10:46:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:20.035 10:46:26 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:20.035 10:46:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:20.035 10:46:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:20.035 10:46:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:20.035 10:46:26 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.035 10:46:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.035 10:46:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.035 10:46:26 -- paths/export.sh@5 -- $ export PATH 00:01:20.035 10:46:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.035 10:46:26 -- 
common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:20.035 10:46:26 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:20.035 10:46:26 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731667586.XXXXXX 00:01:20.035 10:46:26 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731667586.kVNtYZ 00:01:20.035 10:46:26 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:20.035 10:46:26 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:20.035 10:46:26 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:20.035 10:46:26 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:20.035 10:46:26 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:20.035 10:46:26 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:20.035 10:46:26 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:20.035 10:46:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.035 10:46:26 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:20.035 10:46:26 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:20.035 10:46:26 -- pm/common@17 -- $ local monitor 00:01:20.036 10:46:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.036 10:46:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.036 10:46:26 -- pm/common@25 -- $ sleep 1 00:01:20.036 10:46:26 -- pm/common@21 -- $ date +%s 00:01:20.036 10:46:26 -- pm/common@21 -- $ date +%s 00:01:20.036 
10:46:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731667586 00:01:20.036 10:46:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731667586 00:01:20.295 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731667586_collect-vmstat.pm.log 00:01:20.295 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731667586_collect-cpu-load.pm.log 00:01:21.232 10:46:27 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:21.232 10:46:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:21.232 10:46:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:21.232 10:46:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:21.232 10:46:27 -- spdk/autobuild.sh@16 -- $ date -u 00:01:21.232 Fri Nov 15 10:46:27 AM UTC 2024 00:01:21.232 10:46:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:21.232 v25.01-pre-171-g1a15c7136 00:01:21.232 10:46:27 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:21.232 10:46:27 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:21.232 10:46:27 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:21.232 10:46:27 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:21.232 10:46:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.232 ************************************ 00:01:21.232 START TEST asan 00:01:21.232 ************************************ 00:01:21.232 10:46:27 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:01:21.232 using asan 00:01:21.232 00:01:21.232 real 0m0.000s 00:01:21.232 user 0m0.000s 00:01:21.232 sys 0m0.000s 00:01:21.232 10:46:27 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:21.232 10:46:27 asan -- common/autotest_common.sh@10 -- $ set +x 
00:01:21.232 ************************************ 00:01:21.232 END TEST asan 00:01:21.232 ************************************ 00:01:21.232 10:46:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:21.232 10:46:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:21.232 10:46:28 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:21.232 10:46:28 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:21.233 10:46:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.233 ************************************ 00:01:21.233 START TEST ubsan 00:01:21.233 ************************************ 00:01:21.233 using ubsan 00:01:21.233 10:46:28 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:21.233 00:01:21.233 real 0m0.000s 00:01:21.233 user 0m0.000s 00:01:21.233 sys 0m0.000s 00:01:21.233 10:46:28 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:21.233 10:46:28 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:21.233 ************************************ 00:01:21.233 END TEST ubsan 00:01:21.233 ************************************ 00:01:21.233 10:46:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:21.233 10:46:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:21.233 10:46:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:21.233 10:46:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:21.233 10:46:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:21.233 10:46:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:21.233 10:46:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:21.233 10:46:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:21.233 10:46:28 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:21.500 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:21.500 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:22.070 Using 'verbs' RDMA provider 00:01:37.917 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:52.826 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:52.826 Creating mk/config.mk...done. 00:01:52.826 Creating mk/cc.flags.mk...done. 00:01:52.826 Type 'make' to build. 00:01:52.826 10:46:59 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:52.826 10:46:59 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:52.826 10:46:59 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:52.826 10:46:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.085 ************************************ 00:01:53.085 START TEST make 00:01:53.085 ************************************ 00:01:53.085 10:46:59 make -- common/autotest_common.sh@1127 -- $ make -j10 00:01:53.344 make[1]: Nothing to be done for 'all'. 
00:02:08.260 The Meson build system 00:02:08.260 Version: 1.5.0 00:02:08.260 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:08.260 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:08.260 Build type: native build 00:02:08.260 Program cat found: YES (/usr/bin/cat) 00:02:08.260 Project name: DPDK 00:02:08.260 Project version: 24.03.0 00:02:08.260 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:08.260 C linker for the host machine: cc ld.bfd 2.40-14 00:02:08.260 Host machine cpu family: x86_64 00:02:08.260 Host machine cpu: x86_64 00:02:08.260 Message: ## Building in Developer Mode ## 00:02:08.260 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:08.260 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:08.260 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:08.260 Program python3 found: YES (/usr/bin/python3) 00:02:08.260 Program cat found: YES (/usr/bin/cat) 00:02:08.260 Compiler for C supports arguments -march=native: YES 00:02:08.260 Checking for size of "void *" : 8 00:02:08.260 Checking for size of "void *" : 8 (cached) 00:02:08.260 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:08.260 Library m found: YES 00:02:08.260 Library numa found: YES 00:02:08.260 Has header "numaif.h" : YES 00:02:08.260 Library fdt found: NO 00:02:08.260 Library execinfo found: NO 00:02:08.260 Has header "execinfo.h" : YES 00:02:08.260 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:08.260 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:08.260 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:08.260 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:08.260 Run-time dependency openssl found: YES 3.1.1 00:02:08.260 Run-time dependency libpcap found: YES 1.10.4 00:02:08.260 Has header "pcap.h" with dependency 
libpcap: YES 00:02:08.260 Compiler for C supports arguments -Wcast-qual: YES 00:02:08.260 Compiler for C supports arguments -Wdeprecated: YES 00:02:08.260 Compiler for C supports arguments -Wformat: YES 00:02:08.260 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:08.260 Compiler for C supports arguments -Wformat-security: NO 00:02:08.260 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:08.260 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:08.260 Compiler for C supports arguments -Wnested-externs: YES 00:02:08.260 Compiler for C supports arguments -Wold-style-definition: YES 00:02:08.260 Compiler for C supports arguments -Wpointer-arith: YES 00:02:08.260 Compiler for C supports arguments -Wsign-compare: YES 00:02:08.260 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:08.260 Compiler for C supports arguments -Wundef: YES 00:02:08.260 Compiler for C supports arguments -Wwrite-strings: YES 00:02:08.260 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:08.260 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:08.260 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:08.260 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:08.260 Program objdump found: YES (/usr/bin/objdump) 00:02:08.260 Compiler for C supports arguments -mavx512f: YES 00:02:08.260 Checking if "AVX512 checking" compiles: YES 00:02:08.260 Fetching value of define "__SSE4_2__" : 1 00:02:08.260 Fetching value of define "__AES__" : 1 00:02:08.260 Fetching value of define "__AVX__" : 1 00:02:08.260 Fetching value of define "__AVX2__" : 1 00:02:08.260 Fetching value of define "__AVX512BW__" : 1 00:02:08.260 Fetching value of define "__AVX512CD__" : 1 00:02:08.260 Fetching value of define "__AVX512DQ__" : 1 00:02:08.260 Fetching value of define "__AVX512F__" : 1 00:02:08.260 Fetching value of define "__AVX512VL__" : 1 00:02:08.260 Fetching value of define 
"__PCLMUL__" : 1 00:02:08.260 Fetching value of define "__RDRND__" : 1 00:02:08.260 Fetching value of define "__RDSEED__" : 1 00:02:08.260 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:08.260 Fetching value of define "__znver1__" : (undefined) 00:02:08.260 Fetching value of define "__znver2__" : (undefined) 00:02:08.260 Fetching value of define "__znver3__" : (undefined) 00:02:08.260 Fetching value of define "__znver4__" : (undefined) 00:02:08.260 Library asan found: YES 00:02:08.260 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:08.260 Message: lib/log: Defining dependency "log" 00:02:08.260 Message: lib/kvargs: Defining dependency "kvargs" 00:02:08.260 Message: lib/telemetry: Defining dependency "telemetry" 00:02:08.260 Library rt found: YES 00:02:08.260 Checking for function "getentropy" : NO 00:02:08.260 Message: lib/eal: Defining dependency "eal" 00:02:08.260 Message: lib/ring: Defining dependency "ring" 00:02:08.260 Message: lib/rcu: Defining dependency "rcu" 00:02:08.260 Message: lib/mempool: Defining dependency "mempool" 00:02:08.260 Message: lib/mbuf: Defining dependency "mbuf" 00:02:08.260 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:08.260 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:08.260 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:08.260 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:08.260 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:08.260 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:08.260 Compiler for C supports arguments -mpclmul: YES 00:02:08.260 Compiler for C supports arguments -maes: YES 00:02:08.260 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:08.260 Compiler for C supports arguments -mavx512bw: YES 00:02:08.260 Compiler for C supports arguments -mavx512dq: YES 00:02:08.260 Compiler for C supports arguments -mavx512vl: YES 00:02:08.260 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:08.260 Compiler for C supports arguments -mavx2: YES 00:02:08.260 Compiler for C supports arguments -mavx: YES 00:02:08.260 Message: lib/net: Defining dependency "net" 00:02:08.260 Message: lib/meter: Defining dependency "meter" 00:02:08.260 Message: lib/ethdev: Defining dependency "ethdev" 00:02:08.260 Message: lib/pci: Defining dependency "pci" 00:02:08.260 Message: lib/cmdline: Defining dependency "cmdline" 00:02:08.260 Message: lib/hash: Defining dependency "hash" 00:02:08.260 Message: lib/timer: Defining dependency "timer" 00:02:08.260 Message: lib/compressdev: Defining dependency "compressdev" 00:02:08.260 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:08.260 Message: lib/dmadev: Defining dependency "dmadev" 00:02:08.260 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:08.260 Message: lib/power: Defining dependency "power" 00:02:08.260 Message: lib/reorder: Defining dependency "reorder" 00:02:08.260 Message: lib/security: Defining dependency "security" 00:02:08.260 Has header "linux/userfaultfd.h" : YES 00:02:08.260 Has header "linux/vduse.h" : YES 00:02:08.260 Message: lib/vhost: Defining dependency "vhost" 00:02:08.260 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:08.260 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:08.260 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:08.260 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:08.260 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:08.261 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:08.261 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:08.261 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:08.261 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:08.261 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:08.261 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:08.261 Configuring doxy-api-html.conf using configuration 00:02:08.261 Configuring doxy-api-man.conf using configuration 00:02:08.261 Program mandb found: YES (/usr/bin/mandb) 00:02:08.261 Program sphinx-build found: NO 00:02:08.261 Configuring rte_build_config.h using configuration 00:02:08.261 Message: 00:02:08.261 ================= 00:02:08.261 Applications Enabled 00:02:08.261 ================= 00:02:08.261 00:02:08.261 apps: 00:02:08.261 00:02:08.261 00:02:08.261 Message: 00:02:08.261 ================= 00:02:08.261 Libraries Enabled 00:02:08.261 ================= 00:02:08.261 00:02:08.261 libs: 00:02:08.261 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:08.261 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:08.261 cryptodev, dmadev, power, reorder, security, vhost, 00:02:08.261 00:02:08.261 Message: 00:02:08.261 =============== 00:02:08.261 Drivers Enabled 00:02:08.261 =============== 00:02:08.261 00:02:08.261 common: 00:02:08.261 00:02:08.261 bus: 00:02:08.261 pci, vdev, 00:02:08.261 mempool: 00:02:08.261 ring, 00:02:08.261 dma: 00:02:08.261 00:02:08.261 net: 00:02:08.261 00:02:08.261 crypto: 00:02:08.261 00:02:08.261 compress: 00:02:08.261 00:02:08.261 vdpa: 00:02:08.261 00:02:08.261 00:02:08.261 Message: 00:02:08.261 ================= 00:02:08.261 Content Skipped 00:02:08.261 ================= 00:02:08.261 00:02:08.261 apps: 00:02:08.261 dumpcap: explicitly disabled via build config 00:02:08.261 graph: explicitly disabled via build config 00:02:08.261 pdump: explicitly disabled via build config 00:02:08.261 proc-info: explicitly disabled via build config 00:02:08.261 test-acl: explicitly disabled via build config 00:02:08.261 test-bbdev: explicitly disabled via build config 00:02:08.261 test-cmdline: explicitly disabled via build config 00:02:08.261 test-compress-perf: explicitly disabled via build config 00:02:08.261 test-crypto-perf: explicitly disabled via build 
config 00:02:08.261 test-dma-perf: explicitly disabled via build config 00:02:08.261 test-eventdev: explicitly disabled via build config 00:02:08.261 test-fib: explicitly disabled via build config 00:02:08.261 test-flow-perf: explicitly disabled via build config 00:02:08.261 test-gpudev: explicitly disabled via build config 00:02:08.261 test-mldev: explicitly disabled via build config 00:02:08.261 test-pipeline: explicitly disabled via build config 00:02:08.261 test-pmd: explicitly disabled via build config 00:02:08.261 test-regex: explicitly disabled via build config 00:02:08.261 test-sad: explicitly disabled via build config 00:02:08.261 test-security-perf: explicitly disabled via build config 00:02:08.261 00:02:08.261 libs: 00:02:08.261 argparse: explicitly disabled via build config 00:02:08.261 metrics: explicitly disabled via build config 00:02:08.261 acl: explicitly disabled via build config 00:02:08.261 bbdev: explicitly disabled via build config 00:02:08.261 bitratestats: explicitly disabled via build config 00:02:08.261 bpf: explicitly disabled via build config 00:02:08.261 cfgfile: explicitly disabled via build config 00:02:08.261 distributor: explicitly disabled via build config 00:02:08.261 efd: explicitly disabled via build config 00:02:08.261 eventdev: explicitly disabled via build config 00:02:08.261 dispatcher: explicitly disabled via build config 00:02:08.261 gpudev: explicitly disabled via build config 00:02:08.261 gro: explicitly disabled via build config 00:02:08.261 gso: explicitly disabled via build config 00:02:08.261 ip_frag: explicitly disabled via build config 00:02:08.261 jobstats: explicitly disabled via build config 00:02:08.261 latencystats: explicitly disabled via build config 00:02:08.261 lpm: explicitly disabled via build config 00:02:08.261 member: explicitly disabled via build config 00:02:08.261 pcapng: explicitly disabled via build config 00:02:08.261 rawdev: explicitly disabled via build config 00:02:08.261 regexdev: explicitly 
disabled via build config 00:02:08.261 mldev: explicitly disabled via build config 00:02:08.261 rib: explicitly disabled via build config 00:02:08.261 sched: explicitly disabled via build config 00:02:08.261 stack: explicitly disabled via build config 00:02:08.261 ipsec: explicitly disabled via build config 00:02:08.261 pdcp: explicitly disabled via build config 00:02:08.261 fib: explicitly disabled via build config 00:02:08.261 port: explicitly disabled via build config 00:02:08.261 pdump: explicitly disabled via build config 00:02:08.261 table: explicitly disabled via build config 00:02:08.261 pipeline: explicitly disabled via build config 00:02:08.261 graph: explicitly disabled via build config 00:02:08.261 node: explicitly disabled via build config 00:02:08.261 00:02:08.261 drivers: 00:02:08.261 common/cpt: not in enabled drivers build config 00:02:08.261 common/dpaax: not in enabled drivers build config 00:02:08.261 common/iavf: not in enabled drivers build config 00:02:08.261 common/idpf: not in enabled drivers build config 00:02:08.261 common/ionic: not in enabled drivers build config 00:02:08.261 common/mvep: not in enabled drivers build config 00:02:08.261 common/octeontx: not in enabled drivers build config 00:02:08.261 bus/auxiliary: not in enabled drivers build config 00:02:08.261 bus/cdx: not in enabled drivers build config 00:02:08.261 bus/dpaa: not in enabled drivers build config 00:02:08.261 bus/fslmc: not in enabled drivers build config 00:02:08.261 bus/ifpga: not in enabled drivers build config 00:02:08.261 bus/platform: not in enabled drivers build config 00:02:08.261 bus/uacce: not in enabled drivers build config 00:02:08.261 bus/vmbus: not in enabled drivers build config 00:02:08.261 common/cnxk: not in enabled drivers build config 00:02:08.261 common/mlx5: not in enabled drivers build config 00:02:08.261 common/nfp: not in enabled drivers build config 00:02:08.261 common/nitrox: not in enabled drivers build config 00:02:08.261 common/qat: not 
in enabled drivers build config 00:02:08.261 common/sfc_efx: not in enabled drivers build config 00:02:08.261 mempool/bucket: not in enabled drivers build config 00:02:08.261 mempool/cnxk: not in enabled drivers build config 00:02:08.261 mempool/dpaa: not in enabled drivers build config 00:02:08.261 mempool/dpaa2: not in enabled drivers build config 00:02:08.261 mempool/octeontx: not in enabled drivers build config 00:02:08.261 mempool/stack: not in enabled drivers build config 00:02:08.261 dma/cnxk: not in enabled drivers build config 00:02:08.261 dma/dpaa: not in enabled drivers build config 00:02:08.261 dma/dpaa2: not in enabled drivers build config 00:02:08.261 dma/hisilicon: not in enabled drivers build config 00:02:08.261 dma/idxd: not in enabled drivers build config 00:02:08.261 dma/ioat: not in enabled drivers build config 00:02:08.261 dma/skeleton: not in enabled drivers build config 00:02:08.261 net/af_packet: not in enabled drivers build config 00:02:08.261 net/af_xdp: not in enabled drivers build config 00:02:08.261 net/ark: not in enabled drivers build config 00:02:08.261 net/atlantic: not in enabled drivers build config 00:02:08.261 net/avp: not in enabled drivers build config 00:02:08.261 net/axgbe: not in enabled drivers build config 00:02:08.261 net/bnx2x: not in enabled drivers build config 00:02:08.261 net/bnxt: not in enabled drivers build config 00:02:08.261 net/bonding: not in enabled drivers build config 00:02:08.261 net/cnxk: not in enabled drivers build config 00:02:08.261 net/cpfl: not in enabled drivers build config 00:02:08.261 net/cxgbe: not in enabled drivers build config 00:02:08.261 net/dpaa: not in enabled drivers build config 00:02:08.261 net/dpaa2: not in enabled drivers build config 00:02:08.261 net/e1000: not in enabled drivers build config 00:02:08.261 net/ena: not in enabled drivers build config 00:02:08.261 net/enetc: not in enabled drivers build config 00:02:08.261 net/enetfec: not in enabled drivers build config 
00:02:08.261 net/enic: not in enabled drivers build config 00:02:08.261 net/failsafe: not in enabled drivers build config 00:02:08.261 net/fm10k: not in enabled drivers build config 00:02:08.261 net/gve: not in enabled drivers build config 00:02:08.261 net/hinic: not in enabled drivers build config 00:02:08.261 net/hns3: not in enabled drivers build config 00:02:08.261 net/i40e: not in enabled drivers build config 00:02:08.261 net/iavf: not in enabled drivers build config 00:02:08.261 net/ice: not in enabled drivers build config 00:02:08.261 net/idpf: not in enabled drivers build config 00:02:08.261 net/igc: not in enabled drivers build config 00:02:08.261 net/ionic: not in enabled drivers build config 00:02:08.261 net/ipn3ke: not in enabled drivers build config 00:02:08.261 net/ixgbe: not in enabled drivers build config 00:02:08.261 net/mana: not in enabled drivers build config 00:02:08.261 net/memif: not in enabled drivers build config 00:02:08.261 net/mlx4: not in enabled drivers build config 00:02:08.261 net/mlx5: not in enabled drivers build config 00:02:08.261 net/mvneta: not in enabled drivers build config 00:02:08.261 net/mvpp2: not in enabled drivers build config 00:02:08.261 net/netvsc: not in enabled drivers build config 00:02:08.261 net/nfb: not in enabled drivers build config 00:02:08.261 net/nfp: not in enabled drivers build config 00:02:08.261 net/ngbe: not in enabled drivers build config 00:02:08.261 net/null: not in enabled drivers build config 00:02:08.261 net/octeontx: not in enabled drivers build config 00:02:08.261 net/octeon_ep: not in enabled drivers build config 00:02:08.261 net/pcap: not in enabled drivers build config 00:02:08.261 net/pfe: not in enabled drivers build config 00:02:08.261 net/qede: not in enabled drivers build config 00:02:08.261 net/ring: not in enabled drivers build config 00:02:08.261 net/sfc: not in enabled drivers build config 00:02:08.261 net/softnic: not in enabled drivers build config 00:02:08.261 net/tap: not in 
enabled drivers build config 00:02:08.261 net/thunderx: not in enabled drivers build config 00:02:08.261 net/txgbe: not in enabled drivers build config 00:02:08.261 net/vdev_netvsc: not in enabled drivers build config 00:02:08.261 net/vhost: not in enabled drivers build config 00:02:08.261 net/virtio: not in enabled drivers build config 00:02:08.262 net/vmxnet3: not in enabled drivers build config 00:02:08.262 raw/*: missing internal dependency, "rawdev" 00:02:08.262 crypto/armv8: not in enabled drivers build config 00:02:08.262 crypto/bcmfs: not in enabled drivers build config 00:02:08.262 crypto/caam_jr: not in enabled drivers build config 00:02:08.262 crypto/ccp: not in enabled drivers build config 00:02:08.262 crypto/cnxk: not in enabled drivers build config 00:02:08.262 crypto/dpaa_sec: not in enabled drivers build config 00:02:08.262 crypto/dpaa2_sec: not in enabled drivers build config 00:02:08.262 crypto/ipsec_mb: not in enabled drivers build config 00:02:08.262 crypto/mlx5: not in enabled drivers build config 00:02:08.262 crypto/mvsam: not in enabled drivers build config 00:02:08.262 crypto/nitrox: not in enabled drivers build config 00:02:08.262 crypto/null: not in enabled drivers build config 00:02:08.262 crypto/octeontx: not in enabled drivers build config 00:02:08.262 crypto/openssl: not in enabled drivers build config 00:02:08.262 crypto/scheduler: not in enabled drivers build config 00:02:08.262 crypto/uadk: not in enabled drivers build config 00:02:08.262 crypto/virtio: not in enabled drivers build config 00:02:08.262 compress/isal: not in enabled drivers build config 00:02:08.262 compress/mlx5: not in enabled drivers build config 00:02:08.262 compress/nitrox: not in enabled drivers build config 00:02:08.262 compress/octeontx: not in enabled drivers build config 00:02:08.262 compress/zlib: not in enabled drivers build config 00:02:08.262 regex/*: missing internal dependency, "regexdev" 00:02:08.262 ml/*: missing internal dependency, "mldev" 
00:02:08.262 vdpa/ifc: not in enabled drivers build config 00:02:08.262 vdpa/mlx5: not in enabled drivers build config 00:02:08.262 vdpa/nfp: not in enabled drivers build config 00:02:08.262 vdpa/sfc: not in enabled drivers build config 00:02:08.262 event/*: missing internal dependency, "eventdev" 00:02:08.262 baseband/*: missing internal dependency, "bbdev" 00:02:08.262 gpu/*: missing internal dependency, "gpudev" 00:02:08.262 00:02:08.262 00:02:08.262 Build targets in project: 85 00:02:08.262 00:02:08.262 DPDK 24.03.0 00:02:08.262 00:02:08.262 User defined options 00:02:08.262 buildtype : debug 00:02:08.262 default_library : shared 00:02:08.262 libdir : lib 00:02:08.262 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:08.262 b_sanitize : address 00:02:08.262 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:08.262 c_link_args : 00:02:08.262 cpu_instruction_set: native 00:02:08.262 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:08.262 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:08.262 enable_docs : false 00:02:08.262 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:08.262 enable_kmods : false 00:02:08.262 max_lcores : 128 00:02:08.262 tests : false 00:02:08.262 00:02:08.262 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:08.262 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:08.262 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:08.262 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 
00:02:08.262 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:08.262 [4/268] Linking static target lib/librte_log.a 00:02:08.262 [5/268] Linking static target lib/librte_kvargs.a 00:02:08.262 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:08.262 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:08.262 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:08.262 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:08.262 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:08.262 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.262 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:08.262 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:08.262 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:08.262 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:08.262 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:08.262 [17/268] Linking static target lib/librte_telemetry.a 00:02:08.520 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:08.520 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:08.779 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.779 [21/268] Linking target lib/librte_log.so.24.1 00:02:08.779 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:08.779 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:08.779 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:09.039 [25/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:09.039 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:09.039 [27/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:09.039 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:09.039 [29/268] Linking target lib/librte_kvargs.so.24.1 00:02:09.333 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:09.333 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:09.333 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.333 [33/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:09.333 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:09.616 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:09.616 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:09.616 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:09.616 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:09.616 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:09.616 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:09.616 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:09.616 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:09.616 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:10.186 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:10.186 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:10.186 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 
00:02:10.446 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:10.446 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:10.446 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:10.446 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:10.446 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:10.706 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:10.706 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:10.706 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:10.706 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:10.966 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:10.966 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:10.966 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:11.225 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:11.225 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:11.225 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:11.225 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:11.225 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:11.483 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:11.483 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:11.483 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:11.483 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:11.792 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:11.792 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:12.052 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:12.052 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:12.052 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:12.052 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:12.052 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:12.052 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:12.052 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:12.312 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:12.312 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:12.312 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:12.312 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:12.574 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.574 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:12.835 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:12.835 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:12.835 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:12.835 [86/268] Linking static target lib/librte_eal.a 00:02:12.835 [87/268] Linking static target lib/librte_ring.a 00:02:12.835 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:12.835 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:12.835 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.095 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:13.095 [92/268] Linking static target lib/librte_mempool.a 00:02:13.095 [93/268] 
Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.095 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:13.354 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.354 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.354 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:13.354 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:13.613 [99/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:13.613 [100/268] Linking static target lib/librte_rcu.a 00:02:13.613 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:13.613 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:13.613 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:13.872 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.872 [105/268] Linking static target lib/librte_mbuf.a 00:02:13.872 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:13.872 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:13.872 [108/268] Linking static target lib/librte_meter.a 00:02:14.132 [109/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:14.132 [110/268] Linking static target lib/librte_net.a 00:02:14.132 [111/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.132 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:14.132 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:14.389 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.389 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.389 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:14.389 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:14.648 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:14.907 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:14.907 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.907 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:15.166 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:15.166 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:15.425 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:15.425 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:15.425 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:15.685 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:15.685 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:15.685 [129/268] Linking static target lib/librte_pci.a 00:02:15.685 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:15.685 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:15.685 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:15.685 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:15.944 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:15.944 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:15.944 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:15.944 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:15.944 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:15.944 
[139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.944 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:15.944 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:15.944 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:16.202 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:16.202 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:16.202 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:16.202 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:16.202 [147/268] Linking static target lib/librte_cmdline.a 00:02:16.460 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:16.720 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:16.720 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:16.979 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:16.979 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:16.979 [153/268] Linking static target lib/librte_timer.a 00:02:16.979 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:16.979 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:17.238 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:17.498 [157/268] Linking static target lib/librte_ethdev.a 00:02:17.499 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:17.499 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:17.499 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:17.499 [161/268] Linking static target 
lib/librte_compressdev.a 00:02:17.759 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:17.759 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.759 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:17.759 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:17.759 [166/268] Linking static target lib/librte_hash.a 00:02:18.078 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:18.078 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.078 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:18.078 [170/268] Linking static target lib/librte_dmadev.a 00:02:18.078 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:18.337 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:18.337 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:18.597 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.597 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:18.857 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:18.857 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:18.857 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:19.117 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:19.117 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.117 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.377 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:19.377 [183/268] 
Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:19.377 [184/268] Linking static target lib/librte_power.a 00:02:19.635 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:19.635 [186/268] Linking static target lib/librte_cryptodev.a 00:02:19.635 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:19.895 [188/268] Linking static target lib/librte_reorder.a 00:02:19.895 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:19.895 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:19.895 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:19.895 [192/268] Linking static target lib/librte_security.a 00:02:20.154 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:20.413 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.413 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:20.672 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.932 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.932 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:20.932 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:20.932 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:20.932 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:21.192 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:21.451 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:21.451 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:21.451 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 
00:02:21.711 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:21.711 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:21.711 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:21.711 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:21.711 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:21.971 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:21.971 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.971 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.971 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:21.971 [215/268] Linking static target drivers/librte_bus_pci.a 00:02:22.231 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.231 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.231 [218/268] Linking static target drivers/librte_bus_vdev.a 00:02:22.231 [219/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.231 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:22.231 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:22.490 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:22.490 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.490 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.490 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:22.490 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:22.490 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.878 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:24.445 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.705 [230/268] Linking target lib/librte_eal.so.24.1 00:02:24.705 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:24.705 [232/268] Linking target lib/librte_ring.so.24.1 00:02:24.965 [233/268] Linking target lib/librte_meter.so.24.1 00:02:24.965 [234/268] Linking target lib/librte_pci.so.24.1 00:02:24.965 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:24.965 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:24.965 [237/268] Linking target lib/librte_timer.so.24.1 00:02:24.965 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.965 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.965 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.965 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.965 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:24.965 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:24.965 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.965 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:25.224 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:25.224 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:25.224 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:25.224 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:25.524 [250/268] Generating symbol file 
lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:25.524 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:25.524 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:25.524 [253/268] Linking target lib/librte_net.so.24.1 00:02:25.524 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:25.904 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.904 [256/268] Linking target lib/librte_cmdline.so.24.1 00:02:25.904 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.904 [258/268] Linking target lib/librte_hash.so.24.1 00:02:25.904 [259/268] Linking target lib/librte_security.so.24.1 00:02:25.904 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:26.472 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.731 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:26.731 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:26.731 [264/268] Linking target lib/librte_power.so.24.1 00:02:30.021 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:30.021 [266/268] Linking static target lib/librte_vhost.a 00:02:31.934 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.934 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:31.934 INFO: autodetecting backend as ninja 00:02:31.934 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:50.059 CC lib/ut_mock/mock.o 00:02:50.059 CC lib/log/log.o 00:02:50.059 CC lib/log/log_deprecated.o 00:02:50.059 CC lib/log/log_flags.o 00:02:50.059 CC lib/ut/ut.o 00:02:50.059 LIB libspdk_log.a 00:02:50.059 LIB libspdk_ut_mock.a 00:02:50.059 LIB libspdk_ut.a 00:02:50.059 SO libspdk_ut_mock.so.6.0 00:02:50.059 SO 
libspdk_log.so.7.1 00:02:50.059 SO libspdk_ut.so.2.0 00:02:50.059 SYMLINK libspdk_log.so 00:02:50.059 SYMLINK libspdk_ut_mock.so 00:02:50.059 SYMLINK libspdk_ut.so 00:02:50.059 CC lib/util/base64.o 00:02:50.059 CC lib/util/crc16.o 00:02:50.059 CC lib/util/bit_array.o 00:02:50.059 CC lib/util/cpuset.o 00:02:50.059 CC lib/util/crc32.o 00:02:50.059 CC lib/util/crc32c.o 00:02:50.059 CC lib/ioat/ioat.o 00:02:50.059 CXX lib/trace_parser/trace.o 00:02:50.059 CC lib/dma/dma.o 00:02:50.059 CC lib/vfio_user/host/vfio_user_pci.o 00:02:50.059 CC lib/util/crc32_ieee.o 00:02:50.318 CC lib/util/crc64.o 00:02:50.318 CC lib/vfio_user/host/vfio_user.o 00:02:50.318 CC lib/util/dif.o 00:02:50.318 CC lib/util/fd.o 00:02:50.318 CC lib/util/fd_group.o 00:02:50.318 LIB libspdk_dma.a 00:02:50.318 CC lib/util/file.o 00:02:50.318 CC lib/util/hexlify.o 00:02:50.318 SO libspdk_dma.so.5.0 00:02:50.318 SYMLINK libspdk_dma.so 00:02:50.318 LIB libspdk_ioat.a 00:02:50.318 CC lib/util/iov.o 00:02:50.318 CC lib/util/math.o 00:02:50.318 CC lib/util/net.o 00:02:50.318 SO libspdk_ioat.so.7.0 00:02:50.576 LIB libspdk_vfio_user.a 00:02:50.576 CC lib/util/pipe.o 00:02:50.576 CC lib/util/strerror_tls.o 00:02:50.576 SYMLINK libspdk_ioat.so 00:02:50.576 SO libspdk_vfio_user.so.5.0 00:02:50.576 CC lib/util/string.o 00:02:50.576 SYMLINK libspdk_vfio_user.so 00:02:50.576 CC lib/util/uuid.o 00:02:50.576 CC lib/util/xor.o 00:02:50.576 CC lib/util/zipf.o 00:02:50.576 CC lib/util/md5.o 00:02:50.836 LIB libspdk_util.a 00:02:51.095 SO libspdk_util.so.10.1 00:02:51.095 LIB libspdk_trace_parser.a 00:02:51.095 SYMLINK libspdk_util.so 00:02:51.095 SO libspdk_trace_parser.so.6.0 00:02:51.354 SYMLINK libspdk_trace_parser.so 00:02:51.354 CC lib/conf/conf.o 00:02:51.354 CC lib/env_dpdk/env.o 00:02:51.354 CC lib/env_dpdk/memory.o 00:02:51.354 CC lib/env_dpdk/init.o 00:02:51.354 CC lib/env_dpdk/pci.o 00:02:51.354 CC lib/env_dpdk/threads.o 00:02:51.354 CC lib/rdma_utils/rdma_utils.o 00:02:51.354 CC lib/json/json_parse.o 
00:02:51.354 CC lib/vmd/vmd.o 00:02:51.354 CC lib/idxd/idxd.o 00:02:51.614 CC lib/idxd/idxd_user.o 00:02:51.614 LIB libspdk_conf.a 00:02:51.614 SO libspdk_conf.so.6.0 00:02:51.614 LIB libspdk_rdma_utils.a 00:02:51.614 SYMLINK libspdk_conf.so 00:02:51.873 SO libspdk_rdma_utils.so.1.0 00:02:51.873 CC lib/idxd/idxd_kernel.o 00:02:51.873 CC lib/json/json_util.o 00:02:51.873 SYMLINK libspdk_rdma_utils.so 00:02:51.873 CC lib/env_dpdk/pci_ioat.o 00:02:51.873 CC lib/env_dpdk/pci_virtio.o 00:02:51.873 CC lib/vmd/led.o 00:02:51.873 CC lib/env_dpdk/pci_vmd.o 00:02:51.873 CC lib/env_dpdk/pci_idxd.o 00:02:51.873 CC lib/json/json_write.o 00:02:51.873 CC lib/env_dpdk/pci_event.o 00:02:51.873 CC lib/env_dpdk/sigbus_handler.o 00:02:52.132 CC lib/env_dpdk/pci_dpdk.o 00:02:52.132 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:52.132 CC lib/rdma_provider/common.o 00:02:52.132 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:52.132 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:52.132 LIB libspdk_vmd.a 00:02:52.132 LIB libspdk_idxd.a 00:02:52.392 SO libspdk_vmd.so.6.0 00:02:52.392 LIB libspdk_json.a 00:02:52.392 SO libspdk_idxd.so.12.1 00:02:52.392 SO libspdk_json.so.6.0 00:02:52.392 SYMLINK libspdk_vmd.so 00:02:52.392 SYMLINK libspdk_idxd.so 00:02:52.392 LIB libspdk_rdma_provider.a 00:02:52.392 SYMLINK libspdk_json.so 00:02:52.392 SO libspdk_rdma_provider.so.7.0 00:02:52.392 SYMLINK libspdk_rdma_provider.so 00:02:52.960 CC lib/jsonrpc/jsonrpc_server.o 00:02:52.960 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:52.960 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:52.960 CC lib/jsonrpc/jsonrpc_client.o 00:02:53.219 LIB libspdk_jsonrpc.a 00:02:53.219 SO libspdk_jsonrpc.so.6.0 00:02:53.219 SYMLINK libspdk_jsonrpc.so 00:02:53.219 LIB libspdk_env_dpdk.a 00:02:53.478 SO libspdk_env_dpdk.so.15.1 00:02:53.478 SYMLINK libspdk_env_dpdk.so 00:02:53.478 CC lib/rpc/rpc.o 00:02:53.737 LIB libspdk_rpc.a 00:02:53.995 SO libspdk_rpc.so.6.0 00:02:53.995 SYMLINK libspdk_rpc.so 00:02:54.255 CC lib/keyring/keyring.o 00:02:54.255 CC 
lib/keyring/keyring_rpc.o 00:02:54.255 CC lib/notify/notify.o 00:02:54.255 CC lib/trace/trace.o 00:02:54.255 CC lib/notify/notify_rpc.o 00:02:54.255 CC lib/trace/trace_rpc.o 00:02:54.255 CC lib/trace/trace_flags.o 00:02:54.514 LIB libspdk_notify.a 00:02:54.514 SO libspdk_notify.so.6.0 00:02:54.514 LIB libspdk_keyring.a 00:02:54.514 SO libspdk_keyring.so.2.0 00:02:54.773 SYMLINK libspdk_notify.so 00:02:54.773 LIB libspdk_trace.a 00:02:54.773 SYMLINK libspdk_keyring.so 00:02:54.773 SO libspdk_trace.so.11.0 00:02:54.773 SYMLINK libspdk_trace.so 00:02:55.340 CC lib/sock/sock.o 00:02:55.340 CC lib/sock/sock_rpc.o 00:02:55.340 CC lib/thread/thread.o 00:02:55.340 CC lib/thread/iobuf.o 00:02:55.600 LIB libspdk_sock.a 00:02:55.600 SO libspdk_sock.so.10.0 00:02:55.858 SYMLINK libspdk_sock.so 00:02:56.116 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:56.116 CC lib/nvme/nvme_ns_cmd.o 00:02:56.116 CC lib/nvme/nvme_ctrlr.o 00:02:56.116 CC lib/nvme/nvme_pcie_common.o 00:02:56.116 CC lib/nvme/nvme_fabric.o 00:02:56.116 CC lib/nvme/nvme_ns.o 00:02:56.116 CC lib/nvme/nvme_qpair.o 00:02:56.116 CC lib/nvme/nvme_pcie.o 00:02:56.116 CC lib/nvme/nvme.o 00:02:57.049 CC lib/nvme/nvme_quirks.o 00:02:57.049 CC lib/nvme/nvme_transport.o 00:02:57.049 CC lib/nvme/nvme_discovery.o 00:02:57.049 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:57.049 LIB libspdk_thread.a 00:02:57.308 SO libspdk_thread.so.11.0 00:02:57.308 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:57.308 SYMLINK libspdk_thread.so 00:02:57.308 CC lib/nvme/nvme_tcp.o 00:02:57.566 CC lib/nvme/nvme_opal.o 00:02:57.566 CC lib/nvme/nvme_io_msg.o 00:02:57.825 CC lib/nvme/nvme_poll_group.o 00:02:57.825 CC lib/nvme/nvme_zns.o 00:02:57.825 CC lib/accel/accel.o 00:02:57.825 CC lib/nvme/nvme_stubs.o 00:02:57.825 CC lib/nvme/nvme_auth.o 00:02:58.084 CC lib/blob/blobstore.o 00:02:58.343 CC lib/blob/request.o 00:02:58.343 CC lib/init/json_config.o 00:02:58.343 CC lib/virtio/virtio.o 00:02:58.602 CC lib/blob/zeroes.o 00:02:58.602 CC lib/fsdev/fsdev.o 00:02:58.602 CC 
lib/init/subsystem.o 00:02:58.861 CC lib/nvme/nvme_cuse.o 00:02:58.861 CC lib/virtio/virtio_vhost_user.o 00:02:58.861 CC lib/nvme/nvme_rdma.o 00:02:58.861 CC lib/init/subsystem_rpc.o 00:02:59.120 CC lib/init/rpc.o 00:02:59.120 CC lib/accel/accel_rpc.o 00:02:59.120 CC lib/accel/accel_sw.o 00:02:59.120 CC lib/blob/blob_bs_dev.o 00:02:59.120 CC lib/virtio/virtio_vfio_user.o 00:02:59.120 LIB libspdk_init.a 00:02:59.378 CC lib/virtio/virtio_pci.o 00:02:59.378 CC lib/fsdev/fsdev_io.o 00:02:59.378 SO libspdk_init.so.6.0 00:02:59.378 SYMLINK libspdk_init.so 00:02:59.378 CC lib/fsdev/fsdev_rpc.o 00:02:59.378 LIB libspdk_accel.a 00:02:59.635 SO libspdk_accel.so.16.0 00:02:59.635 CC lib/event/app.o 00:02:59.635 CC lib/event/log_rpc.o 00:02:59.635 CC lib/event/app_rpc.o 00:02:59.635 CC lib/event/reactor.o 00:02:59.635 LIB libspdk_virtio.a 00:02:59.635 SYMLINK libspdk_accel.so 00:02:59.635 CC lib/event/scheduler_static.o 00:02:59.635 SO libspdk_virtio.so.7.0 00:02:59.635 LIB libspdk_fsdev.a 00:02:59.635 SO libspdk_fsdev.so.2.0 00:02:59.635 SYMLINK libspdk_virtio.so 00:02:59.953 SYMLINK libspdk_fsdev.so 00:02:59.953 CC lib/bdev/bdev_rpc.o 00:02:59.953 CC lib/bdev/bdev.o 00:02:59.953 CC lib/bdev/bdev_zone.o 00:02:59.953 CC lib/bdev/scsi_nvme.o 00:02:59.953 CC lib/bdev/part.o 00:03:00.231 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:00.231 LIB libspdk_event.a 00:03:00.231 SO libspdk_event.so.14.0 00:03:00.231 SYMLINK libspdk_event.so 00:03:00.801 LIB libspdk_nvme.a 00:03:00.801 SO libspdk_nvme.so.15.0 00:03:00.801 LIB libspdk_fuse_dispatcher.a 00:03:01.060 SO libspdk_fuse_dispatcher.so.1.0 00:03:01.060 SYMLINK libspdk_fuse_dispatcher.so 00:03:01.320 SYMLINK libspdk_nvme.so 00:03:02.259 LIB libspdk_blob.a 00:03:02.259 SO libspdk_blob.so.11.0 00:03:02.518 SYMLINK libspdk_blob.so 00:03:02.777 CC lib/blobfs/blobfs.o 00:03:02.777 CC lib/blobfs/tree.o 00:03:02.777 CC lib/lvol/lvol.o 00:03:03.345 LIB libspdk_bdev.a 00:03:03.345 SO libspdk_bdev.so.17.0 00:03:03.603 SYMLINK 
libspdk_bdev.so 00:03:03.900 CC lib/ublk/ublk.o 00:03:03.900 CC lib/ublk/ublk_rpc.o 00:03:03.900 CC lib/nvmf/ctrlr_discovery.o 00:03:03.900 CC lib/scsi/dev.o 00:03:03.900 CC lib/ftl/ftl_core.o 00:03:03.900 CC lib/scsi/lun.o 00:03:03.900 CC lib/nvmf/ctrlr.o 00:03:03.900 CC lib/nbd/nbd.o 00:03:03.900 LIB libspdk_blobfs.a 00:03:03.900 SO libspdk_blobfs.so.10.0 00:03:03.900 SYMLINK libspdk_blobfs.so 00:03:03.900 CC lib/nbd/nbd_rpc.o 00:03:03.900 LIB libspdk_lvol.a 00:03:03.900 CC lib/ftl/ftl_init.o 00:03:04.158 SO libspdk_lvol.so.10.0 00:03:04.158 CC lib/nvmf/ctrlr_bdev.o 00:03:04.158 SYMLINK libspdk_lvol.so 00:03:04.158 CC lib/ftl/ftl_layout.o 00:03:04.158 CC lib/ftl/ftl_debug.o 00:03:04.158 CC lib/ftl/ftl_io.o 00:03:04.158 CC lib/ftl/ftl_sb.o 00:03:04.416 CC lib/scsi/port.o 00:03:04.416 LIB libspdk_nbd.a 00:03:04.416 CC lib/ftl/ftl_l2p.o 00:03:04.416 SO libspdk_nbd.so.7.0 00:03:04.416 CC lib/nvmf/subsystem.o 00:03:04.416 CC lib/nvmf/nvmf.o 00:03:04.416 CC lib/nvmf/nvmf_rpc.o 00:03:04.416 SYMLINK libspdk_nbd.so 00:03:04.416 CC lib/nvmf/transport.o 00:03:04.416 CC lib/scsi/scsi.o 00:03:04.674 CC lib/nvmf/tcp.o 00:03:04.674 CC lib/ftl/ftl_l2p_flat.o 00:03:04.674 CC lib/scsi/scsi_bdev.o 00:03:04.674 LIB libspdk_ublk.a 00:03:04.933 SO libspdk_ublk.so.3.0 00:03:04.933 CC lib/ftl/ftl_nv_cache.o 00:03:04.933 SYMLINK libspdk_ublk.so 00:03:04.933 CC lib/nvmf/stubs.o 00:03:05.192 CC lib/nvmf/mdns_server.o 00:03:05.452 CC lib/nvmf/rdma.o 00:03:05.452 CC lib/scsi/scsi_pr.o 00:03:05.452 CC lib/ftl/ftl_band.o 00:03:05.452 CC lib/nvmf/auth.o 00:03:05.711 CC lib/scsi/scsi_rpc.o 00:03:05.711 CC lib/scsi/task.o 00:03:05.970 CC lib/ftl/ftl_band_ops.o 00:03:05.970 CC lib/ftl/ftl_writer.o 00:03:05.970 LIB libspdk_scsi.a 00:03:05.970 CC lib/ftl/ftl_rq.o 00:03:05.970 SO libspdk_scsi.so.9.0 00:03:05.970 CC lib/ftl/ftl_reloc.o 00:03:06.229 SYMLINK libspdk_scsi.so 00:03:06.229 CC lib/ftl/ftl_l2p_cache.o 00:03:06.229 CC lib/ftl/ftl_p2l.o 00:03:06.229 CC lib/ftl/ftl_p2l_log.o 00:03:06.229 CC 
lib/ftl/mngt/ftl_mngt.o 00:03:06.229 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:06.229 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:06.488 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:06.488 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:06.488 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:06.488 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:06.748 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:06.748 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:06.748 CC lib/iscsi/conn.o 00:03:06.748 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:06.748 CC lib/iscsi/init_grp.o 00:03:06.748 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:06.748 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:07.007 CC lib/vhost/vhost.o 00:03:07.007 CC lib/iscsi/iscsi.o 00:03:07.007 CC lib/vhost/vhost_rpc.o 00:03:07.007 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:07.007 CC lib/iscsi/param.o 00:03:07.007 CC lib/iscsi/portal_grp.o 00:03:07.266 CC lib/iscsi/tgt_node.o 00:03:07.266 CC lib/iscsi/iscsi_subsystem.o 00:03:07.526 CC lib/ftl/utils/ftl_conf.o 00:03:07.526 CC lib/iscsi/iscsi_rpc.o 00:03:07.526 CC lib/iscsi/task.o 00:03:07.526 CC lib/ftl/utils/ftl_md.o 00:03:07.526 CC lib/vhost/vhost_scsi.o 00:03:07.526 CC lib/vhost/vhost_blk.o 00:03:07.816 CC lib/vhost/rte_vhost_user.o 00:03:07.816 CC lib/ftl/utils/ftl_mempool.o 00:03:07.816 CC lib/ftl/utils/ftl_bitmap.o 00:03:07.816 CC lib/ftl/utils/ftl_property.o 00:03:08.073 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:08.073 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:08.073 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:08.073 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:08.331 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:08.331 LIB libspdk_nvmf.a 00:03:08.331 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:08.331 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:08.331 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:08.590 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:08.590 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:08.590 SO libspdk_nvmf.so.20.0 00:03:08.590 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:08.590 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:08.590 CC 
lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:08.590 CC lib/ftl/base/ftl_base_dev.o 00:03:08.848 CC lib/ftl/base/ftl_base_bdev.o 00:03:08.848 LIB libspdk_iscsi.a 00:03:08.848 CC lib/ftl/ftl_trace.o 00:03:08.848 SYMLINK libspdk_nvmf.so 00:03:08.848 SO libspdk_iscsi.so.8.0 00:03:09.106 LIB libspdk_vhost.a 00:03:09.106 SYMLINK libspdk_iscsi.so 00:03:09.106 SO libspdk_vhost.so.8.0 00:03:09.106 LIB libspdk_ftl.a 00:03:09.364 SYMLINK libspdk_vhost.so 00:03:09.364 SO libspdk_ftl.so.9.0 00:03:09.930 SYMLINK libspdk_ftl.so 00:03:10.188 CC module/env_dpdk/env_dpdk_rpc.o 00:03:10.188 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:10.188 CC module/sock/posix/posix.o 00:03:10.188 CC module/fsdev/aio/fsdev_aio.o 00:03:10.188 CC module/keyring/linux/keyring.o 00:03:10.188 CC module/keyring/file/keyring.o 00:03:10.188 CC module/blob/bdev/blob_bdev.o 00:03:10.188 CC module/accel/error/accel_error.o 00:03:10.188 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:10.188 CC module/scheduler/gscheduler/gscheduler.o 00:03:10.445 LIB libspdk_env_dpdk_rpc.a 00:03:10.445 SO libspdk_env_dpdk_rpc.so.6.0 00:03:10.445 LIB libspdk_scheduler_dpdk_governor.a 00:03:10.445 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:10.445 CC module/keyring/file/keyring_rpc.o 00:03:10.445 CC module/keyring/linux/keyring_rpc.o 00:03:10.445 SYMLINK libspdk_env_dpdk_rpc.so 00:03:10.445 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:10.445 CC module/accel/error/accel_error_rpc.o 00:03:10.445 LIB libspdk_scheduler_gscheduler.a 00:03:10.445 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:10.445 SO libspdk_scheduler_gscheduler.so.4.0 00:03:10.445 LIB libspdk_scheduler_dynamic.a 00:03:10.445 SO libspdk_scheduler_dynamic.so.4.0 00:03:10.703 LIB libspdk_keyring_linux.a 00:03:10.703 LIB libspdk_keyring_file.a 00:03:10.703 SYMLINK libspdk_scheduler_gscheduler.so 00:03:10.703 SYMLINK libspdk_scheduler_dynamic.so 00:03:10.703 SO libspdk_keyring_linux.so.1.0 00:03:10.703 SO libspdk_keyring_file.so.2.0 00:03:10.703 CC 
module/fsdev/aio/linux_aio_mgr.o 00:03:10.703 LIB libspdk_blob_bdev.a 00:03:10.703 LIB libspdk_accel_error.a 00:03:10.703 SO libspdk_blob_bdev.so.11.0 00:03:10.703 SO libspdk_accel_error.so.2.0 00:03:10.703 CC module/accel/ioat/accel_ioat.o 00:03:10.703 SYMLINK libspdk_keyring_file.so 00:03:10.703 SYMLINK libspdk_keyring_linux.so 00:03:10.703 CC module/accel/ioat/accel_ioat_rpc.o 00:03:10.703 SYMLINK libspdk_blob_bdev.so 00:03:10.703 SYMLINK libspdk_accel_error.so 00:03:10.703 CC module/accel/dsa/accel_dsa.o 00:03:10.703 CC module/accel/dsa/accel_dsa_rpc.o 00:03:10.703 CC module/accel/iaa/accel_iaa.o 00:03:10.961 LIB libspdk_accel_ioat.a 00:03:10.961 SO libspdk_accel_ioat.so.6.0 00:03:10.961 SYMLINK libspdk_accel_ioat.so 00:03:10.961 CC module/bdev/delay/vbdev_delay.o 00:03:10.961 CC module/blobfs/bdev/blobfs_bdev.o 00:03:10.961 CC module/bdev/error/vbdev_error.o 00:03:10.961 CC module/bdev/gpt/gpt.o 00:03:10.961 CC module/accel/iaa/accel_iaa_rpc.o 00:03:11.219 LIB libspdk_fsdev_aio.a 00:03:11.219 CC module/bdev/lvol/vbdev_lvol.o 00:03:11.219 SO libspdk_fsdev_aio.so.1.0 00:03:11.219 CC module/bdev/malloc/bdev_malloc.o 00:03:11.219 LIB libspdk_accel_iaa.a 00:03:11.219 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:11.219 SO libspdk_accel_iaa.so.3.0 00:03:11.219 SYMLINK libspdk_fsdev_aio.so 00:03:11.219 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:11.219 CC module/bdev/gpt/vbdev_gpt.o 00:03:11.219 LIB libspdk_sock_posix.a 00:03:11.219 SYMLINK libspdk_accel_iaa.so 00:03:11.219 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:11.219 LIB libspdk_accel_dsa.a 00:03:11.219 CC module/bdev/error/vbdev_error_rpc.o 00:03:11.479 SO libspdk_sock_posix.so.6.0 00:03:11.479 SO libspdk_accel_dsa.so.5.0 00:03:11.479 LIB libspdk_blobfs_bdev.a 00:03:11.479 SYMLINK libspdk_accel_dsa.so 00:03:11.479 SYMLINK libspdk_sock_posix.so 00:03:11.479 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:11.479 SO libspdk_blobfs_bdev.so.6.0 00:03:11.479 LIB libspdk_bdev_delay.a 00:03:11.479 LIB 
libspdk_bdev_error.a 00:03:11.479 SYMLINK libspdk_blobfs_bdev.so 00:03:11.479 SO libspdk_bdev_delay.so.6.0 00:03:11.479 SO libspdk_bdev_error.so.6.0 00:03:11.479 LIB libspdk_bdev_gpt.a 00:03:11.479 SO libspdk_bdev_gpt.so.6.0 00:03:11.738 SYMLINK libspdk_bdev_delay.so 00:03:11.738 CC module/bdev/null/bdev_null.o 00:03:11.738 SYMLINK libspdk_bdev_error.so 00:03:11.738 CC module/bdev/null/bdev_null_rpc.o 00:03:11.738 CC module/bdev/nvme/bdev_nvme.o 00:03:11.738 SYMLINK libspdk_bdev_gpt.so 00:03:11.738 LIB libspdk_bdev_malloc.a 00:03:11.738 CC module/bdev/passthru/vbdev_passthru.o 00:03:11.738 SO libspdk_bdev_malloc.so.6.0 00:03:11.738 LIB libspdk_bdev_lvol.a 00:03:11.738 CC module/bdev/raid/bdev_raid.o 00:03:11.738 SO libspdk_bdev_lvol.so.6.0 00:03:11.738 SYMLINK libspdk_bdev_malloc.so 00:03:11.738 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:11.996 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:11.996 CC module/bdev/split/vbdev_split.o 00:03:11.996 CC module/bdev/aio/bdev_aio.o 00:03:11.996 SYMLINK libspdk_bdev_lvol.so 00:03:11.996 CC module/bdev/split/vbdev_split_rpc.o 00:03:11.996 LIB libspdk_bdev_null.a 00:03:11.996 SO libspdk_bdev_null.so.6.0 00:03:11.996 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:11.996 CC module/bdev/ftl/bdev_ftl.o 00:03:11.996 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:11.996 SYMLINK libspdk_bdev_null.so 00:03:12.255 LIB libspdk_bdev_split.a 00:03:12.255 SO libspdk_bdev_split.so.6.0 00:03:12.255 LIB libspdk_bdev_zone_block.a 00:03:12.255 LIB libspdk_bdev_passthru.a 00:03:12.255 SO libspdk_bdev_zone_block.so.6.0 00:03:12.255 CC module/bdev/iscsi/bdev_iscsi.o 00:03:12.255 SYMLINK libspdk_bdev_split.so 00:03:12.255 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:12.255 SO libspdk_bdev_passthru.so.6.0 00:03:12.255 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:12.255 CC module/bdev/aio/bdev_aio_rpc.o 00:03:12.255 SYMLINK libspdk_bdev_zone_block.so 00:03:12.513 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:12.513 CC 
module/bdev/raid/bdev_raid_rpc.o 00:03:12.513 SYMLINK libspdk_bdev_passthru.so 00:03:12.513 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:12.513 LIB libspdk_bdev_ftl.a 00:03:12.513 SO libspdk_bdev_ftl.so.6.0 00:03:12.513 CC module/bdev/raid/bdev_raid_sb.o 00:03:12.513 LIB libspdk_bdev_aio.a 00:03:12.513 SYMLINK libspdk_bdev_ftl.so 00:03:12.513 CC module/bdev/nvme/nvme_rpc.o 00:03:12.513 SO libspdk_bdev_aio.so.6.0 00:03:12.513 CC module/bdev/raid/raid0.o 00:03:12.770 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:12.770 SYMLINK libspdk_bdev_aio.so 00:03:12.770 CC module/bdev/raid/raid1.o 00:03:12.770 LIB libspdk_bdev_iscsi.a 00:03:12.770 SO libspdk_bdev_iscsi.so.6.0 00:03:12.770 CC module/bdev/raid/concat.o 00:03:13.029 CC module/bdev/nvme/bdev_mdns_client.o 00:03:13.029 CC module/bdev/nvme/vbdev_opal.o 00:03:13.029 SYMLINK libspdk_bdev_iscsi.so 00:03:13.029 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:13.029 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:13.029 LIB libspdk_bdev_virtio.a 00:03:13.029 CC module/bdev/raid/raid5f.o 00:03:13.029 SO libspdk_bdev_virtio.so.6.0 00:03:13.286 SYMLINK libspdk_bdev_virtio.so 00:03:13.852 LIB libspdk_bdev_raid.a 00:03:13.852 SO libspdk_bdev_raid.so.6.0 00:03:13.852 SYMLINK libspdk_bdev_raid.so 00:03:14.816 LIB libspdk_bdev_nvme.a 00:03:15.097 SO libspdk_bdev_nvme.so.7.1 00:03:15.097 SYMLINK libspdk_bdev_nvme.so 00:03:15.665 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:15.665 CC module/event/subsystems/keyring/keyring.o 00:03:15.665 CC module/event/subsystems/vmd/vmd.o 00:03:15.665 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:15.665 CC module/event/subsystems/iobuf/iobuf.o 00:03:15.665 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:15.665 CC module/event/subsystems/sock/sock.o 00:03:15.665 CC module/event/subsystems/fsdev/fsdev.o 00:03:15.923 CC module/event/subsystems/scheduler/scheduler.o 00:03:15.923 LIB libspdk_event_keyring.a 00:03:15.923 LIB libspdk_event_vhost_blk.a 00:03:15.923 LIB libspdk_event_fsdev.a 
00:03:15.923 SO libspdk_event_keyring.so.1.0 00:03:15.923 SO libspdk_event_fsdev.so.1.0 00:03:15.923 SO libspdk_event_vhost_blk.so.3.0 00:03:15.923 LIB libspdk_event_vmd.a 00:03:15.923 LIB libspdk_event_scheduler.a 00:03:15.923 LIB libspdk_event_iobuf.a 00:03:15.923 LIB libspdk_event_sock.a 00:03:15.923 SO libspdk_event_vmd.so.6.0 00:03:15.923 SO libspdk_event_scheduler.so.4.0 00:03:15.923 SYMLINK libspdk_event_fsdev.so 00:03:15.923 SYMLINK libspdk_event_keyring.so 00:03:15.923 SYMLINK libspdk_event_vhost_blk.so 00:03:15.923 SO libspdk_event_sock.so.5.0 00:03:15.923 SO libspdk_event_iobuf.so.3.0 00:03:16.182 SYMLINK libspdk_event_scheduler.so 00:03:16.182 SYMLINK libspdk_event_sock.so 00:03:16.182 SYMLINK libspdk_event_iobuf.so 00:03:16.182 SYMLINK libspdk_event_vmd.so 00:03:16.440 CC module/event/subsystems/accel/accel.o 00:03:16.699 LIB libspdk_event_accel.a 00:03:16.699 SO libspdk_event_accel.so.6.0 00:03:16.699 SYMLINK libspdk_event_accel.so 00:03:17.266 CC module/event/subsystems/bdev/bdev.o 00:03:17.266 LIB libspdk_event_bdev.a 00:03:17.266 SO libspdk_event_bdev.so.6.0 00:03:17.524 SYMLINK libspdk_event_bdev.so 00:03:17.524 CC module/event/subsystems/ublk/ublk.o 00:03:17.783 CC module/event/subsystems/nbd/nbd.o 00:03:17.783 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:17.783 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:17.783 CC module/event/subsystems/scsi/scsi.o 00:03:17.783 LIB libspdk_event_scsi.a 00:03:17.783 LIB libspdk_event_ublk.a 00:03:17.783 LIB libspdk_event_nbd.a 00:03:17.783 SO libspdk_event_ublk.so.3.0 00:03:17.783 SO libspdk_event_scsi.so.6.0 00:03:17.783 SO libspdk_event_nbd.so.6.0 00:03:18.041 SYMLINK libspdk_event_ublk.so 00:03:18.041 LIB libspdk_event_nvmf.a 00:03:18.041 SYMLINK libspdk_event_scsi.so 00:03:18.041 SYMLINK libspdk_event_nbd.so 00:03:18.041 SO libspdk_event_nvmf.so.6.0 00:03:18.041 SYMLINK libspdk_event_nvmf.so 00:03:18.298 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:18.298 CC 
module/event/subsystems/iscsi/iscsi.o 00:03:18.298 LIB libspdk_event_vhost_scsi.a 00:03:18.298 LIB libspdk_event_iscsi.a 00:03:18.298 SO libspdk_event_vhost_scsi.so.3.0 00:03:18.557 SO libspdk_event_iscsi.so.6.0 00:03:18.557 SYMLINK libspdk_event_vhost_scsi.so 00:03:18.557 SYMLINK libspdk_event_iscsi.so 00:03:18.845 SO libspdk.so.6.0 00:03:18.845 SYMLINK libspdk.so 00:03:18.845 CC test/rpc_client/rpc_client_test.o 00:03:18.845 TEST_HEADER include/spdk/accel.h 00:03:18.845 TEST_HEADER include/spdk/accel_module.h 00:03:18.845 TEST_HEADER include/spdk/assert.h 00:03:18.845 TEST_HEADER include/spdk/barrier.h 00:03:18.845 TEST_HEADER include/spdk/base64.h 00:03:18.845 TEST_HEADER include/spdk/bdev.h 00:03:18.845 TEST_HEADER include/spdk/bdev_module.h 00:03:18.845 TEST_HEADER include/spdk/bdev_zone.h 00:03:18.845 CXX app/trace/trace.o 00:03:18.845 TEST_HEADER include/spdk/bit_array.h 00:03:18.845 TEST_HEADER include/spdk/bit_pool.h 00:03:18.845 TEST_HEADER include/spdk/blob_bdev.h 00:03:18.845 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:19.103 TEST_HEADER include/spdk/blobfs.h 00:03:19.103 TEST_HEADER include/spdk/blob.h 00:03:19.103 TEST_HEADER include/spdk/conf.h 00:03:19.103 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:19.103 TEST_HEADER include/spdk/config.h 00:03:19.103 TEST_HEADER include/spdk/cpuset.h 00:03:19.103 TEST_HEADER include/spdk/crc16.h 00:03:19.103 TEST_HEADER include/spdk/crc32.h 00:03:19.103 TEST_HEADER include/spdk/crc64.h 00:03:19.103 TEST_HEADER include/spdk/dif.h 00:03:19.103 TEST_HEADER include/spdk/dma.h 00:03:19.103 TEST_HEADER include/spdk/endian.h 00:03:19.103 TEST_HEADER include/spdk/env_dpdk.h 00:03:19.103 TEST_HEADER include/spdk/env.h 00:03:19.103 TEST_HEADER include/spdk/event.h 00:03:19.103 TEST_HEADER include/spdk/fd_group.h 00:03:19.103 TEST_HEADER include/spdk/fd.h 00:03:19.103 TEST_HEADER include/spdk/file.h 00:03:19.103 TEST_HEADER include/spdk/fsdev.h 00:03:19.103 TEST_HEADER include/spdk/fsdev_module.h 00:03:19.103 
TEST_HEADER include/spdk/ftl.h 00:03:19.103 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:19.103 TEST_HEADER include/spdk/gpt_spec.h 00:03:19.103 CC examples/ioat/perf/perf.o 00:03:19.103 TEST_HEADER include/spdk/hexlify.h 00:03:19.103 TEST_HEADER include/spdk/histogram_data.h 00:03:19.103 CC examples/util/zipf/zipf.o 00:03:19.103 TEST_HEADER include/spdk/idxd.h 00:03:19.103 TEST_HEADER include/spdk/idxd_spec.h 00:03:19.103 TEST_HEADER include/spdk/init.h 00:03:19.103 TEST_HEADER include/spdk/ioat.h 00:03:19.103 TEST_HEADER include/spdk/ioat_spec.h 00:03:19.103 TEST_HEADER include/spdk/iscsi_spec.h 00:03:19.103 CC test/dma/test_dma/test_dma.o 00:03:19.103 TEST_HEADER include/spdk/json.h 00:03:19.103 TEST_HEADER include/spdk/jsonrpc.h 00:03:19.103 TEST_HEADER include/spdk/keyring.h 00:03:19.103 CC test/thread/poller_perf/poller_perf.o 00:03:19.103 TEST_HEADER include/spdk/keyring_module.h 00:03:19.103 TEST_HEADER include/spdk/likely.h 00:03:19.103 TEST_HEADER include/spdk/log.h 00:03:19.103 CC test/app/bdev_svc/bdev_svc.o 00:03:19.103 TEST_HEADER include/spdk/lvol.h 00:03:19.103 TEST_HEADER include/spdk/md5.h 00:03:19.103 TEST_HEADER include/spdk/memory.h 00:03:19.103 TEST_HEADER include/spdk/mmio.h 00:03:19.103 TEST_HEADER include/spdk/nbd.h 00:03:19.103 TEST_HEADER include/spdk/net.h 00:03:19.103 TEST_HEADER include/spdk/notify.h 00:03:19.103 TEST_HEADER include/spdk/nvme.h 00:03:19.103 TEST_HEADER include/spdk/nvme_intel.h 00:03:19.103 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:19.103 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:19.103 TEST_HEADER include/spdk/nvme_spec.h 00:03:19.103 TEST_HEADER include/spdk/nvme_zns.h 00:03:19.103 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:19.103 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:19.103 TEST_HEADER include/spdk/nvmf.h 00:03:19.103 TEST_HEADER include/spdk/nvmf_spec.h 00:03:19.103 TEST_HEADER include/spdk/nvmf_transport.h 00:03:19.103 TEST_HEADER include/spdk/opal.h 00:03:19.103 TEST_HEADER 
include/spdk/opal_spec.h 00:03:19.103 TEST_HEADER include/spdk/pci_ids.h 00:03:19.103 TEST_HEADER include/spdk/pipe.h 00:03:19.103 TEST_HEADER include/spdk/queue.h 00:03:19.103 TEST_HEADER include/spdk/reduce.h 00:03:19.103 CC test/env/mem_callbacks/mem_callbacks.o 00:03:19.103 TEST_HEADER include/spdk/rpc.h 00:03:19.103 TEST_HEADER include/spdk/scheduler.h 00:03:19.103 TEST_HEADER include/spdk/scsi.h 00:03:19.103 TEST_HEADER include/spdk/scsi_spec.h 00:03:19.103 TEST_HEADER include/spdk/sock.h 00:03:19.103 TEST_HEADER include/spdk/stdinc.h 00:03:19.103 TEST_HEADER include/spdk/string.h 00:03:19.103 TEST_HEADER include/spdk/thread.h 00:03:19.103 TEST_HEADER include/spdk/trace.h 00:03:19.103 LINK rpc_client_test 00:03:19.103 TEST_HEADER include/spdk/trace_parser.h 00:03:19.103 TEST_HEADER include/spdk/tree.h 00:03:19.104 TEST_HEADER include/spdk/ublk.h 00:03:19.104 TEST_HEADER include/spdk/util.h 00:03:19.104 TEST_HEADER include/spdk/uuid.h 00:03:19.104 TEST_HEADER include/spdk/version.h 00:03:19.104 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:19.104 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:19.104 TEST_HEADER include/spdk/vhost.h 00:03:19.104 TEST_HEADER include/spdk/vmd.h 00:03:19.104 TEST_HEADER include/spdk/xor.h 00:03:19.104 TEST_HEADER include/spdk/zipf.h 00:03:19.104 CXX test/cpp_headers/accel.o 00:03:19.362 LINK zipf 00:03:19.362 LINK poller_perf 00:03:19.362 LINK interrupt_tgt 00:03:19.362 LINK bdev_svc 00:03:19.362 LINK ioat_perf 00:03:19.362 CXX test/cpp_headers/accel_module.o 00:03:19.621 CXX test/cpp_headers/assert.o 00:03:19.621 CC test/env/vtophys/vtophys.o 00:03:19.621 LINK spdk_trace 00:03:19.621 CXX test/cpp_headers/barrier.o 00:03:19.621 CC examples/ioat/verify/verify.o 00:03:19.621 CC test/event/event_perf/event_perf.o 00:03:19.621 LINK test_dma 00:03:19.879 CC examples/thread/thread/thread_ex.o 00:03:19.879 LINK vtophys 00:03:19.879 CC examples/sock/hello_world/hello_sock.o 00:03:19.879 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 
00:03:19.879 LINK mem_callbacks 00:03:19.879 CXX test/cpp_headers/base64.o 00:03:19.879 LINK event_perf 00:03:19.879 LINK verify 00:03:20.138 CC app/trace_record/trace_record.o 00:03:20.138 LINK thread 00:03:20.138 CXX test/cpp_headers/bdev.o 00:03:20.138 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:20.138 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:20.138 LINK hello_sock 00:03:20.396 CC app/nvmf_tgt/nvmf_main.o 00:03:20.396 CC test/event/reactor/reactor.o 00:03:20.396 LINK nvme_fuzz 00:03:20.396 LINK spdk_trace_record 00:03:20.396 CXX test/cpp_headers/bdev_module.o 00:03:20.396 LINK env_dpdk_post_init 00:03:20.396 LINK nvmf_tgt 00:03:20.654 CC test/blobfs/mkfs/mkfs.o 00:03:20.654 CC test/accel/dif/dif.o 00:03:20.654 CXX test/cpp_headers/bdev_zone.o 00:03:20.654 LINK reactor 00:03:20.654 CC examples/vmd/lsvmd/lsvmd.o 00:03:20.654 CC test/env/memory/memory_ut.o 00:03:20.654 CXX test/cpp_headers/bit_array.o 00:03:20.912 CC examples/idxd/perf/perf.o 00:03:20.912 LINK mkfs 00:03:20.912 CC test/lvol/esnap/esnap.o 00:03:20.912 CC app/iscsi_tgt/iscsi_tgt.o 00:03:20.912 LINK lsvmd 00:03:20.912 CC test/event/reactor_perf/reactor_perf.o 00:03:20.912 CXX test/cpp_headers/bit_pool.o 00:03:21.170 LINK reactor_perf 00:03:21.170 CXX test/cpp_headers/blob_bdev.o 00:03:21.170 LINK iscsi_tgt 00:03:21.170 CC examples/vmd/led/led.o 00:03:21.170 CC app/spdk_tgt/spdk_tgt.o 00:03:21.170 LINK idxd_perf 00:03:21.428 LINK led 00:03:21.428 CXX test/cpp_headers/blobfs_bdev.o 00:03:21.428 CC test/event/app_repeat/app_repeat.o 00:03:21.428 LINK dif 00:03:21.428 LINK spdk_tgt 00:03:21.428 CC app/spdk_lspci/spdk_lspci.o 00:03:21.686 CXX test/cpp_headers/blobfs.o 00:03:21.686 LINK app_repeat 00:03:21.686 CC test/env/pci/pci_ut.o 00:03:21.686 LINK spdk_lspci 00:03:21.686 CXX test/cpp_headers/blob.o 00:03:21.944 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:21.944 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:21.944 CXX test/cpp_headers/conf.o 00:03:21.944 CC 
test/event/scheduler/scheduler.o 00:03:21.944 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:22.201 CC app/spdk_nvme_perf/perf.o 00:03:22.201 CC app/spdk_nvme_identify/identify.o 00:03:22.201 LINK hello_fsdev 00:03:22.201 CXX test/cpp_headers/config.o 00:03:22.201 LINK pci_ut 00:03:22.201 CXX test/cpp_headers/cpuset.o 00:03:22.201 LINK memory_ut 00:03:22.201 LINK scheduler 00:03:22.459 CXX test/cpp_headers/crc16.o 00:03:22.459 CXX test/cpp_headers/crc32.o 00:03:22.718 CC examples/accel/perf/accel_perf.o 00:03:22.718 LINK vhost_fuzz 00:03:22.718 CC app/spdk_nvme_discover/discovery_aer.o 00:03:22.718 CC test/nvme/aer/aer.o 00:03:22.718 LINK iscsi_fuzz 00:03:22.718 CC test/nvme/reset/reset.o 00:03:22.718 CXX test/cpp_headers/crc64.o 00:03:22.976 LINK spdk_nvme_discover 00:03:22.976 CXX test/cpp_headers/dif.o 00:03:22.976 CC app/spdk_top/spdk_top.o 00:03:22.976 LINK reset 00:03:22.976 LINK aer 00:03:22.976 CXX test/cpp_headers/dma.o 00:03:22.976 CXX test/cpp_headers/endian.o 00:03:22.976 CC test/app/histogram_perf/histogram_perf.o 00:03:23.235 LINK accel_perf 00:03:23.235 LINK histogram_perf 00:03:23.235 CC test/nvme/sgl/sgl.o 00:03:23.235 CC test/nvme/e2edp/nvme_dp.o 00:03:23.235 CC test/app/jsoncat/jsoncat.o 00:03:23.235 CXX test/cpp_headers/env_dpdk.o 00:03:23.492 LINK jsoncat 00:03:23.492 CXX test/cpp_headers/env.o 00:03:23.492 CC test/nvme/overhead/overhead.o 00:03:23.492 LINK spdk_nvme_perf 00:03:23.492 LINK spdk_nvme_identify 00:03:23.492 LINK sgl 00:03:23.749 CC examples/blob/hello_world/hello_blob.o 00:03:23.749 LINK nvme_dp 00:03:23.749 CXX test/cpp_headers/event.o 00:03:23.749 CC test/app/stub/stub.o 00:03:23.749 CXX test/cpp_headers/fd_group.o 00:03:23.749 CXX test/cpp_headers/fd.o 00:03:23.749 LINK overhead 00:03:24.007 CC app/vhost/vhost.o 00:03:24.007 LINK hello_blob 00:03:24.007 LINK stub 00:03:24.007 CXX test/cpp_headers/file.o 00:03:24.007 CC examples/blob/cli/blobcli.o 00:03:24.007 CC test/nvme/err_injection/err_injection.o 00:03:24.007 CC 
test/bdev/bdevio/bdevio.o 00:03:24.007 LINK vhost 00:03:24.007 CXX test/cpp_headers/fsdev.o 00:03:24.007 CXX test/cpp_headers/fsdev_module.o 00:03:24.266 CC app/spdk_dd/spdk_dd.o 00:03:24.266 LINK spdk_top 00:03:24.266 LINK err_injection 00:03:24.266 CC app/fio/nvme/fio_plugin.o 00:03:24.266 CXX test/cpp_headers/ftl.o 00:03:24.266 CC test/nvme/startup/startup.o 00:03:24.568 CC examples/nvme/hello_world/hello_world.o 00:03:24.568 LINK bdevio 00:03:24.568 CXX test/cpp_headers/fuse_dispatcher.o 00:03:24.568 CC app/fio/bdev/fio_plugin.o 00:03:24.568 CC examples/bdev/hello_world/hello_bdev.o 00:03:24.568 LINK startup 00:03:24.568 LINK blobcli 00:03:24.568 LINK spdk_dd 00:03:24.851 CXX test/cpp_headers/gpt_spec.o 00:03:24.851 LINK hello_world 00:03:24.851 LINK hello_bdev 00:03:24.851 CC test/nvme/reserve/reserve.o 00:03:24.851 CC test/nvme/simple_copy/simple_copy.o 00:03:24.851 CXX test/cpp_headers/hexlify.o 00:03:24.851 CC test/nvme/connect_stress/connect_stress.o 00:03:24.851 CC test/nvme/boot_partition/boot_partition.o 00:03:24.851 LINK spdk_nvme 00:03:25.112 CC examples/nvme/reconnect/reconnect.o 00:03:25.112 CXX test/cpp_headers/histogram_data.o 00:03:25.112 CXX test/cpp_headers/idxd.o 00:03:25.112 LINK boot_partition 00:03:25.112 LINK reserve 00:03:25.112 LINK connect_stress 00:03:25.112 LINK spdk_bdev 00:03:25.112 LINK simple_copy 00:03:25.112 CC examples/bdev/bdevperf/bdevperf.o 00:03:25.112 CXX test/cpp_headers/idxd_spec.o 00:03:25.112 CXX test/cpp_headers/init.o 00:03:25.112 CXX test/cpp_headers/ioat.o 00:03:25.371 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:25.371 CC test/nvme/compliance/nvme_compliance.o 00:03:25.371 CC test/nvme/fused_ordering/fused_ordering.o 00:03:25.371 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:25.371 LINK reconnect 00:03:25.371 CXX test/cpp_headers/ioat_spec.o 00:03:25.371 CC test/nvme/fdp/fdp.o 00:03:25.371 CC test/nvme/cuse/cuse.o 00:03:25.631 LINK fused_ordering 00:03:25.631 LINK doorbell_aers 00:03:25.631 CXX 
test/cpp_headers/iscsi_spec.o 00:03:25.631 CC examples/nvme/arbitration/arbitration.o 00:03:25.631 LINK nvme_compliance 00:03:25.631 CXX test/cpp_headers/json.o 00:03:25.631 CXX test/cpp_headers/jsonrpc.o 00:03:25.890 CC examples/nvme/hotplug/hotplug.o 00:03:25.890 LINK fdp 00:03:25.890 LINK nvme_manage 00:03:25.890 CXX test/cpp_headers/keyring.o 00:03:25.890 CXX test/cpp_headers/keyring_module.o 00:03:25.890 CXX test/cpp_headers/likely.o 00:03:26.148 CXX test/cpp_headers/log.o 00:03:26.148 LINK arbitration 00:03:26.148 LINK hotplug 00:03:26.148 CXX test/cpp_headers/lvol.o 00:03:26.148 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:26.148 CC examples/nvme/abort/abort.o 00:03:26.148 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:26.148 CXX test/cpp_headers/md5.o 00:03:26.148 CXX test/cpp_headers/memory.o 00:03:26.148 CXX test/cpp_headers/mmio.o 00:03:26.148 CXX test/cpp_headers/nbd.o 00:03:26.148 CXX test/cpp_headers/net.o 00:03:26.148 LINK cmb_copy 00:03:26.406 LINK bdevperf 00:03:26.406 LINK pmr_persistence 00:03:26.406 CXX test/cpp_headers/notify.o 00:03:26.406 CXX test/cpp_headers/nvme.o 00:03:26.406 CXX test/cpp_headers/nvme_intel.o 00:03:26.406 CXX test/cpp_headers/nvme_ocssd.o 00:03:26.406 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:26.406 CXX test/cpp_headers/nvme_spec.o 00:03:26.406 CXX test/cpp_headers/nvme_zns.o 00:03:26.406 LINK abort 00:03:26.664 CXX test/cpp_headers/nvmf_cmd.o 00:03:26.664 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:26.664 CXX test/cpp_headers/nvmf.o 00:03:26.664 CXX test/cpp_headers/nvmf_spec.o 00:03:26.664 CXX test/cpp_headers/nvmf_transport.o 00:03:26.664 CXX test/cpp_headers/opal.o 00:03:26.664 CXX test/cpp_headers/opal_spec.o 00:03:26.664 CXX test/cpp_headers/pci_ids.o 00:03:26.664 CXX test/cpp_headers/pipe.o 00:03:26.923 CXX test/cpp_headers/queue.o 00:03:26.923 CXX test/cpp_headers/reduce.o 00:03:26.923 CXX test/cpp_headers/rpc.o 00:03:26.923 CXX test/cpp_headers/scheduler.o 00:03:26.923 CXX test/cpp_headers/scsi.o 
00:03:26.923 CXX test/cpp_headers/scsi_spec.o 00:03:26.923 CC examples/nvmf/nvmf/nvmf.o 00:03:26.923 CXX test/cpp_headers/sock.o 00:03:26.923 CXX test/cpp_headers/stdinc.o 00:03:26.923 LINK cuse 00:03:26.923 CXX test/cpp_headers/string.o 00:03:26.923 CXX test/cpp_headers/thread.o 00:03:26.923 CXX test/cpp_headers/trace.o 00:03:26.923 CXX test/cpp_headers/trace_parser.o 00:03:27.182 CXX test/cpp_headers/tree.o 00:03:27.182 CXX test/cpp_headers/ublk.o 00:03:27.182 CXX test/cpp_headers/util.o 00:03:27.182 CXX test/cpp_headers/uuid.o 00:03:27.182 CXX test/cpp_headers/version.o 00:03:27.182 CXX test/cpp_headers/vfio_user_pci.o 00:03:27.182 CXX test/cpp_headers/vfio_user_spec.o 00:03:27.182 CXX test/cpp_headers/vhost.o 00:03:27.182 CXX test/cpp_headers/vmd.o 00:03:27.182 LINK nvmf 00:03:27.182 CXX test/cpp_headers/xor.o 00:03:27.458 CXX test/cpp_headers/zipf.o 00:03:28.393 LINK esnap 00:03:29.327 00:03:29.327 real 1m36.132s 00:03:29.327 user 8m59.982s 00:03:29.327 sys 1m48.786s 00:03:29.327 10:48:35 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:29.327 10:48:35 make -- common/autotest_common.sh@10 -- $ set +x 00:03:29.327 ************************************ 00:03:29.327 END TEST make 00:03:29.327 ************************************ 00:03:29.327 10:48:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:29.327 10:48:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:29.327 10:48:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:29.327 10:48:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.327 10:48:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:29.328 10:48:35 -- pm/common@44 -- $ pid=5461 00:03:29.328 10:48:35 -- pm/common@50 -- $ kill -TERM 5461 00:03:29.328 10:48:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.328 10:48:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid 
]] 00:03:29.328 10:48:35 -- pm/common@44 -- $ pid=5462 00:03:29.328 10:48:35 -- pm/common@50 -- $ kill -TERM 5462 00:03:29.328 10:48:35 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:29.328 10:48:35 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:29.328 10:48:36 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:29.328 10:48:36 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:29.328 10:48:36 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:29.328 10:48:36 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:29.328 10:48:36 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:29.328 10:48:36 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:29.328 10:48:36 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:29.328 10:48:36 -- scripts/common.sh@336 -- # IFS=.-: 00:03:29.328 10:48:36 -- scripts/common.sh@336 -- # read -ra ver1 00:03:29.328 10:48:36 -- scripts/common.sh@337 -- # IFS=.-: 00:03:29.328 10:48:36 -- scripts/common.sh@337 -- # read -ra ver2 00:03:29.328 10:48:36 -- scripts/common.sh@338 -- # local 'op=<' 00:03:29.328 10:48:36 -- scripts/common.sh@340 -- # ver1_l=2 00:03:29.328 10:48:36 -- scripts/common.sh@341 -- # ver2_l=1 00:03:29.328 10:48:36 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:29.328 10:48:36 -- scripts/common.sh@344 -- # case "$op" in 00:03:29.328 10:48:36 -- scripts/common.sh@345 -- # : 1 00:03:29.328 10:48:36 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:29.328 10:48:36 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:29.328 10:48:36 -- scripts/common.sh@365 -- # decimal 1 00:03:29.328 10:48:36 -- scripts/common.sh@353 -- # local d=1 00:03:29.328 10:48:36 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:29.328 10:48:36 -- scripts/common.sh@355 -- # echo 1 00:03:29.328 10:48:36 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:29.328 10:48:36 -- scripts/common.sh@366 -- # decimal 2 00:03:29.328 10:48:36 -- scripts/common.sh@353 -- # local d=2 00:03:29.328 10:48:36 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:29.328 10:48:36 -- scripts/common.sh@355 -- # echo 2 00:03:29.328 10:48:36 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:29.328 10:48:36 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:29.328 10:48:36 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:29.328 10:48:36 -- scripts/common.sh@368 -- # return 0 00:03:29.328 10:48:36 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:29.328 10:48:36 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:29.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.328 --rc genhtml_branch_coverage=1 00:03:29.328 --rc genhtml_function_coverage=1 00:03:29.328 --rc genhtml_legend=1 00:03:29.328 --rc geninfo_all_blocks=1 00:03:29.328 --rc geninfo_unexecuted_blocks=1 00:03:29.328 00:03:29.328 ' 00:03:29.328 10:48:36 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:29.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.328 --rc genhtml_branch_coverage=1 00:03:29.328 --rc genhtml_function_coverage=1 00:03:29.328 --rc genhtml_legend=1 00:03:29.328 --rc geninfo_all_blocks=1 00:03:29.328 --rc geninfo_unexecuted_blocks=1 00:03:29.328 00:03:29.328 ' 00:03:29.328 10:48:36 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:29.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.328 --rc genhtml_branch_coverage=1 00:03:29.328 --rc 
genhtml_function_coverage=1 00:03:29.328 --rc genhtml_legend=1 00:03:29.328 --rc geninfo_all_blocks=1 00:03:29.328 --rc geninfo_unexecuted_blocks=1 00:03:29.328 00:03:29.328 ' 00:03:29.328 10:48:36 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:29.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.328 --rc genhtml_branch_coverage=1 00:03:29.328 --rc genhtml_function_coverage=1 00:03:29.328 --rc genhtml_legend=1 00:03:29.328 --rc geninfo_all_blocks=1 00:03:29.328 --rc geninfo_unexecuted_blocks=1 00:03:29.328 00:03:29.328 ' 00:03:29.328 10:48:36 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:29.328 10:48:36 -- nvmf/common.sh@7 -- # uname -s 00:03:29.328 10:48:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:29.328 10:48:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:29.328 10:48:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:29.328 10:48:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:29.328 10:48:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:29.328 10:48:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:29.328 10:48:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:29.328 10:48:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:29.328 10:48:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:29.328 10:48:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:29.328 10:48:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:25153ce7-b438-470b-af0e-c451b6522a73 00:03:29.328 10:48:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=25153ce7-b438-470b-af0e-c451b6522a73 00:03:29.328 10:48:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:29.328 10:48:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:29.328 10:48:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:29.328 10:48:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
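The xtrace above shows scripts/common.sh deciding that the installed lcov (1.15) predates 2.x: `lt 1.15 2` splits both version strings on `.`, `-` and `:` and compares them field by field as integers, treating missing fields as 0. A condensed, self-contained sketch of that comparison (hypothetical `version_lt` name; not the literal SPDK helper):

```shell
#!/usr/bin/env bash
# Field-wise dotted-version comparison, condensed from the cmp_versions
# trace above. Returns 0 (shell "true") when $1 < $2.
version_lt() {
    local IFS=.-:          # split on the same separators the trace shows
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # absent field counts as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1               # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov predates 2.x"
```

This is why the run above appends the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options: they are only needed for the pre-2.x lcov syntax.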
00:03:29.328 10:48:36 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:29.328 10:48:36 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:29.328 10:48:36 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:29.328 10:48:36 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:29.328 10:48:36 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:29.328 10:48:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.328 10:48:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.328 10:48:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.328 10:48:36 -- paths/export.sh@5 -- # export PATH 00:03:29.328 10:48:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.328 10:48:36 -- nvmf/common.sh@51 -- # : 0 00:03:29.328 10:48:36 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:29.328 10:48:36 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:29.328 10:48:36 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:29.328 10:48:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:29.328 10:48:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:29.328 10:48:36 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:29.328 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:29.328 10:48:36 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:29.328 10:48:36 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:29.328 10:48:36 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:29.328 10:48:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:29.328 10:48:36 -- spdk/autotest.sh@32 -- # uname -s 00:03:29.328 10:48:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:29.328 10:48:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:29.328 10:48:36 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:29.328 10:48:36 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:29.328 10:48:36 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:29.328 10:48:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:29.587 10:48:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:29.587 10:48:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:29.587 10:48:36 -- spdk/autotest.sh@48 -- # udevadm_pid=54534 00:03:29.587 10:48:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:29.587 10:48:36 -- pm/common@17 -- # local monitor 00:03:29.587 10:48:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.587 10:48:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:29.587 10:48:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.587 10:48:36 -- pm/common@21 -- # date +%s 00:03:29.587 10:48:36 -- pm/common@25 -- # sleep 1 00:03:29.587 10:48:36 -- 
pm/common@21 -- # date +%s 00:03:29.587 10:48:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731667716 00:03:29.587 10:48:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731667716 00:03:29.587 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731667716_collect-vmstat.pm.log 00:03:29.587 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731667716_collect-cpu-load.pm.log 00:03:30.545 10:48:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:30.545 10:48:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:30.545 10:48:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:30.545 10:48:37 -- common/autotest_common.sh@10 -- # set +x 00:03:30.545 10:48:37 -- spdk/autotest.sh@59 -- # create_test_list 00:03:30.545 10:48:37 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:30.545 10:48:37 -- common/autotest_common.sh@10 -- # set +x 00:03:30.545 10:48:37 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:30.545 10:48:37 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:30.545 10:48:37 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:30.545 10:48:37 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:30.545 10:48:37 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:30.545 10:48:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:30.545 10:48:37 -- common/autotest_common.sh@1455 -- # uname 00:03:30.545 10:48:37 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:30.545 10:48:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:30.545 10:48:37 -- common/autotest_common.sh@1475 -- 
# uname 00:03:30.545 10:48:37 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:30.545 10:48:37 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:30.545 10:48:37 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:30.545 lcov: LCOV version 1.15 00:03:30.819 10:48:37 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:45.699 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:45.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:03.801 10:49:08 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:03.801 10:49:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.801 10:49:08 -- common/autotest_common.sh@10 -- # set +x 00:04:03.801 10:49:08 -- spdk/autotest.sh@78 -- # rm -f 00:04:03.801 10:49:08 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:03.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.801 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:03.801 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:03.801 10:49:09 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:03.801 10:49:09 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:03.801 10:49:09 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:03.801 10:49:09 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:03.801 
10:49:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:03.801 10:49:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:03.801 10:49:09 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:03.801 10:49:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:03.801 10:49:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:03.801 10:49:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:03.801 10:49:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:03.801 10:49:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:03.801 10:49:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:03.801 10:49:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:03.801 10:49:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:03.801 10:49:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:03.801 10:49:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:03.801 10:49:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:03.801 10:49:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:03.801 10:49:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:03.801 10:49:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:03.801 10:49:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:03.801 10:49:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:03.801 10:49:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:03.801 10:49:09 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:03.801 10:49:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.801 10:49:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.801 10:49:09 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:03.801 10:49:09 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:03.801 10:49:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:03.801 No valid GPT data, bailing 00:04:03.801 10:49:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:03.801 10:49:09 -- scripts/common.sh@394 -- # pt= 00:04:03.801 10:49:09 -- scripts/common.sh@395 -- # return 1 00:04:03.801 10:49:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:03.801 1+0 records in 00:04:03.801 1+0 records out 00:04:03.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00631616 s, 166 MB/s 00:04:03.801 10:49:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.801 10:49:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.801 10:49:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:03.801 10:49:09 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:03.801 10:49:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:03.801 No valid GPT data, bailing 00:04:03.801 10:49:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:03.801 10:49:09 -- scripts/common.sh@394 -- # pt= 00:04:03.801 10:49:09 -- scripts/common.sh@395 -- # return 1 00:04:03.801 10:49:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:03.801 1+0 records in 00:04:03.801 1+0 records out 00:04:03.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00520839 s, 201 MB/s 00:04:03.801 10:49:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.801 10:49:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.801 10:49:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:03.801 10:49:09 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:03.801 10:49:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:03.801 No valid GPT data, bailing 00:04:03.801 10:49:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:03.801 10:49:09 -- scripts/common.sh@394 -- # pt= 00:04:03.801 10:49:09 -- scripts/common.sh@395 -- # return 1 00:04:03.801 10:49:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:03.801 1+0 records in 00:04:03.801 1+0 records out 00:04:03.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00365111 s, 287 MB/s 00:04:03.801 10:49:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.801 10:49:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.801 10:49:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:03.801 10:49:09 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:03.801 10:49:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:03.801 No valid GPT data, bailing 00:04:03.801 10:49:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:03.801 10:49:09 -- scripts/common.sh@394 -- # pt= 00:04:03.801 10:49:09 -- scripts/common.sh@395 -- # return 1 00:04:03.801 10:49:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:03.801 1+0 records in 00:04:03.801 1+0 records out 00:04:03.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00386735 s, 271 MB/s 00:04:03.801 10:49:09 -- spdk/autotest.sh@105 -- # sync 00:04:03.802 10:49:09 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:03.802 10:49:09 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:03.802 10:49:09 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:05.182 10:49:12 -- spdk/autotest.sh@111 -- # uname -s 00:04:05.182 10:49:12 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:05.182 10:49:12 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:05.182 10:49:12 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:04:06.121 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.121 Hugepages 00:04:06.121 node hugesize free / total 00:04:06.121 node0 1048576kB 0 / 0 00:04:06.121 node0 2048kB 0 / 0 00:04:06.121 00:04:06.121 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:06.121 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:06.121 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:06.379 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:06.379 10:49:13 -- spdk/autotest.sh@117 -- # uname -s 00:04:06.379 10:49:13 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:06.379 10:49:13 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:06.379 10:49:13 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:06.946 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.205 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.205 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.205 10:49:14 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:08.140 10:49:15 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:08.140 10:49:15 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:08.140 10:49:15 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:08.140 10:49:15 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:08.140 10:49:15 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:08.140 10:49:15 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:08.140 10:49:15 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:08.140 10:49:15 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:08.140 10:49:15 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:08.398 10:49:15 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:08.398 10:49:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:08.398 10:49:15 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.916 Waiting for block devices as requested 00:04:08.916 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:08.916 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:08.916 10:49:15 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:08.916 10:49:15 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:08.916 10:49:15 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:08.916 10:49:15 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:09.175 10:49:15 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:09.175 10:49:15 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:09.175 10:49:15 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:09.175 10:49:15 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:09.175 10:49:15 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:09.175 10:49:15 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:09.175 10:49:15 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:09.175 10:49:15 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:09.175 10:49:15 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:09.175 10:49:15 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:09.175 10:49:15 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:09.175 10:49:15 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:04:09.175 10:49:15 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:09.175 10:49:15 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:09.175 10:49:15 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:09.175 10:49:15 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:09.175 10:49:15 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:09.175 10:49:15 -- common/autotest_common.sh@1541 -- # continue 00:04:09.175 10:49:15 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:09.175 10:49:15 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:09.175 10:49:15 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:09.175 10:49:15 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:09.175 10:49:15 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:09.175 10:49:15 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:09.175 10:49:15 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:09.175 10:49:15 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:09.175 10:49:15 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:09.175 10:49:15 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:09.175 10:49:15 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:09.175 10:49:15 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:09.175 10:49:15 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:09.175 10:49:15 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:09.175 10:49:15 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:09.175 10:49:15 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:09.175 10:49:15 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:04:09.175 10:49:15 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:09.175 10:49:15 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:09.175 10:49:15 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:09.175 10:49:15 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:09.175 10:49:15 -- common/autotest_common.sh@1541 -- # continue 00:04:09.175 10:49:15 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:09.175 10:49:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:09.175 10:49:15 -- common/autotest_common.sh@10 -- # set +x 00:04:09.175 10:49:15 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:09.175 10:49:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:09.175 10:49:15 -- common/autotest_common.sh@10 -- # set +x 00:04:09.175 10:49:15 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:10.107 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.107 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.107 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.107 10:49:16 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:10.107 10:49:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:10.107 10:49:16 -- common/autotest_common.sh@10 -- # set +x 00:04:10.107 10:49:17 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:10.107 10:49:17 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:10.107 10:49:17 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:10.107 10:49:17 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:10.107 10:49:17 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:10.107 10:49:17 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:10.107 10:49:17 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:10.107 10:49:17 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:10.107 
10:49:17 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:10.107 10:49:17 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:10.107 10:49:17 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:10.366 10:49:17 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:10.366 10:49:17 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:10.366 10:49:17 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:10.366 10:49:17 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:10.366 10:49:17 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:10.366 10:49:17 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:10.366 10:49:17 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:10.366 10:49:17 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:10.366 10:49:17 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:10.366 10:49:17 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:10.366 10:49:17 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:10.366 10:49:17 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:10.366 10:49:17 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:10.366 10:49:17 -- common/autotest_common.sh@1570 -- # return 0 00:04:10.366 10:49:17 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:10.366 10:49:17 -- common/autotest_common.sh@1578 -- # return 0 00:04:10.366 10:49:17 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:10.366 10:49:17 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:10.366 10:49:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:10.366 10:49:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:10.366 10:49:17 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:10.366 10:49:17 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:04:10.366 10:49:17 -- common/autotest_common.sh@10 -- # set +x 00:04:10.366 10:49:17 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:10.366 10:49:17 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:10.366 10:49:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:10.366 10:49:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:10.366 10:49:17 -- common/autotest_common.sh@10 -- # set +x 00:04:10.366 ************************************ 00:04:10.366 START TEST env 00:04:10.366 ************************************ 00:04:10.366 10:49:17 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:10.366 * Looking for test storage... 00:04:10.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:10.366 10:49:17 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:10.366 10:49:17 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:10.366 10:49:17 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:10.664 10:49:17 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:10.664 10:49:17 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.664 10:49:17 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.664 10:49:17 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.664 10:49:17 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.664 10:49:17 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.664 10:49:17 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.664 10:49:17 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.664 10:49:17 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.664 10:49:17 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.664 10:49:17 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.664 10:49:17 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.664 10:49:17 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:10.664 10:49:17 env -- scripts/common.sh@345 -- # : 1 00:04:10.664 10:49:17 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.664 10:49:17 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:10.664 10:49:17 env -- scripts/common.sh@365 -- # decimal 1 00:04:10.664 10:49:17 env -- scripts/common.sh@353 -- # local d=1 00:04:10.664 10:49:17 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.664 10:49:17 env -- scripts/common.sh@355 -- # echo 1 00:04:10.664 10:49:17 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.664 10:49:17 env -- scripts/common.sh@366 -- # decimal 2 00:04:10.664 10:49:17 env -- scripts/common.sh@353 -- # local d=2 00:04:10.664 10:49:17 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.664 10:49:17 env -- scripts/common.sh@355 -- # echo 2 00:04:10.664 10:49:17 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.664 10:49:17 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.664 10:49:17 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.664 10:49:17 env -- scripts/common.sh@368 -- # return 0 00:04:10.664 10:49:17 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.664 10:49:17 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:10.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.664 --rc genhtml_branch_coverage=1 00:04:10.664 --rc genhtml_function_coverage=1 00:04:10.664 --rc genhtml_legend=1 00:04:10.664 --rc geninfo_all_blocks=1 00:04:10.664 --rc geninfo_unexecuted_blocks=1 00:04:10.664 00:04:10.664 ' 00:04:10.664 10:49:17 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:10.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.664 --rc genhtml_branch_coverage=1 00:04:10.664 --rc genhtml_function_coverage=1 00:04:10.664 --rc genhtml_legend=1 00:04:10.664 --rc 
geninfo_all_blocks=1 00:04:10.664 --rc geninfo_unexecuted_blocks=1 00:04:10.664 00:04:10.664 ' 00:04:10.664 10:49:17 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:10.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.664 --rc genhtml_branch_coverage=1 00:04:10.664 --rc genhtml_function_coverage=1 00:04:10.664 --rc genhtml_legend=1 00:04:10.664 --rc geninfo_all_blocks=1 00:04:10.664 --rc geninfo_unexecuted_blocks=1 00:04:10.664 00:04:10.664 ' 00:04:10.664 10:49:17 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:10.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.664 --rc genhtml_branch_coverage=1 00:04:10.664 --rc genhtml_function_coverage=1 00:04:10.664 --rc genhtml_legend=1 00:04:10.664 --rc geninfo_all_blocks=1 00:04:10.664 --rc geninfo_unexecuted_blocks=1 00:04:10.664 00:04:10.664 ' 00:04:10.664 10:49:17 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:10.664 10:49:17 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:10.664 10:49:17 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:10.664 10:49:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.664 ************************************ 00:04:10.664 START TEST env_memory 00:04:10.664 ************************************ 00:04:10.664 10:49:17 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:10.664 00:04:10.664 00:04:10.664 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.664 http://cunit.sourceforge.net/ 00:04:10.664 00:04:10.664 00:04:10.664 Suite: memory 00:04:10.664 Test: alloc and free memory map ...[2024-11-15 10:49:17.445494] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:10.664 passed 00:04:10.664 Test: mem map translation ...[2024-11-15 10:49:17.497779] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:10.664 [2024-11-15 10:49:17.497860] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:10.664 [2024-11-15 10:49:17.497957] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:10.664 [2024-11-15 10:49:17.497982] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:10.664 passed 00:04:10.664 Test: mem map registration ...[2024-11-15 10:49:17.577445] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:10.664 [2024-11-15 10:49:17.577543] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:10.922 passed 00:04:10.922 Test: mem map adjacent registrations ...passed 00:04:10.922 00:04:10.922 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.922 suites 1 1 n/a 0 0 00:04:10.922 tests 4 4 4 0 0 00:04:10.922 asserts 152 152 152 0 n/a 00:04:10.922 00:04:10.922 Elapsed time = 0.283 seconds 00:04:10.922 00:04:10.922 real 0m0.337s 00:04:10.922 user 0m0.291s 00:04:10.922 sys 0m0.035s 00:04:10.922 10:49:17 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:10.922 10:49:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:10.922 ************************************ 00:04:10.922 END TEST env_memory 00:04:10.922 ************************************ 00:04:10.922 10:49:17 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:10.922 
10:49:17 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:10.922 10:49:17 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:10.922 10:49:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.922 ************************************ 00:04:10.922 START TEST env_vtophys 00:04:10.922 ************************************ 00:04:10.922 10:49:17 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:10.922 EAL: lib.eal log level changed from notice to debug 00:04:10.922 EAL: Detected lcore 0 as core 0 on socket 0 00:04:10.922 EAL: Detected lcore 1 as core 0 on socket 0 00:04:10.922 EAL: Detected lcore 2 as core 0 on socket 0 00:04:10.922 EAL: Detected lcore 3 as core 0 on socket 0 00:04:10.923 EAL: Detected lcore 4 as core 0 on socket 0 00:04:10.923 EAL: Detected lcore 5 as core 0 on socket 0 00:04:10.923 EAL: Detected lcore 6 as core 0 on socket 0 00:04:10.923 EAL: Detected lcore 7 as core 0 on socket 0 00:04:10.923 EAL: Detected lcore 8 as core 0 on socket 0 00:04:10.923 EAL: Detected lcore 9 as core 0 on socket 0 00:04:10.923 EAL: Maximum logical cores by configuration: 128 00:04:10.923 EAL: Detected CPU lcores: 10 00:04:10.923 EAL: Detected NUMA nodes: 1 00:04:10.923 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:10.923 EAL: Detected shared linkage of DPDK 00:04:11.180 EAL: No shared files mode enabled, IPC will be disabled 00:04:11.180 EAL: Selected IOVA mode 'PA' 00:04:11.180 EAL: Probing VFIO support... 00:04:11.180 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:11.180 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:11.180 EAL: Ask a virtual area of 0x2e000 bytes 00:04:11.180 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:11.181 EAL: Setting up physically contiguous memory... 
00:04:11.181 EAL: Setting maximum number of open files to 524288 00:04:11.181 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:11.181 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:11.181 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.181 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:11.181 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.181 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.181 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:11.181 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:11.181 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.181 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:11.181 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.181 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.181 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:11.181 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:11.181 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.181 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:11.181 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.181 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.181 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:11.181 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:11.181 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.181 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:11.181 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.181 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.181 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:11.181 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:11.181 EAL: Hugepages will be freed exactly as allocated. 
00:04:11.181 EAL: No shared files mode enabled, IPC is disabled 00:04:11.181 EAL: No shared files mode enabled, IPC is disabled 00:04:11.181 EAL: TSC frequency is ~2290000 KHz 00:04:11.181 EAL: Main lcore 0 is ready (tid=7f1f4609ba40;cpuset=[0]) 00:04:11.181 EAL: Trying to obtain current memory policy. 00:04:11.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.181 EAL: Restoring previous memory policy: 0 00:04:11.181 EAL: request: mp_malloc_sync 00:04:11.181 EAL: No shared files mode enabled, IPC is disabled 00:04:11.181 EAL: Heap on socket 0 was expanded by 2MB 00:04:11.181 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:11.181 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:11.181 EAL: Mem event callback 'spdk:(nil)' registered 00:04:11.181 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:11.181 00:04:11.181 00:04:11.181 CUnit - A unit testing framework for C - Version 2.1-3 00:04:11.181 http://cunit.sourceforge.net/ 00:04:11.181 00:04:11.181 00:04:11.181 Suite: components_suite 00:04:11.747 Test: vtophys_malloc_test ...passed 00:04:11.747 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:11.747 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.747 EAL: Restoring previous memory policy: 4 00:04:11.747 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.747 EAL: request: mp_malloc_sync 00:04:11.747 EAL: No shared files mode enabled, IPC is disabled 00:04:11.747 EAL: Heap on socket 0 was expanded by 4MB 00:04:11.747 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.747 EAL: request: mp_malloc_sync 00:04:11.747 EAL: No shared files mode enabled, IPC is disabled 00:04:11.747 EAL: Heap on socket 0 was shrunk by 4MB 00:04:11.747 EAL: Trying to obtain current memory policy. 
00:04:11.747 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.747 EAL: Restoring previous memory policy: 4 00:04:11.747 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.747 EAL: request: mp_malloc_sync 00:04:11.747 EAL: No shared files mode enabled, IPC is disabled 00:04:11.747 EAL: Heap on socket 0 was expanded by 6MB 00:04:11.747 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.747 EAL: request: mp_malloc_sync 00:04:11.747 EAL: No shared files mode enabled, IPC is disabled 00:04:11.747 EAL: Heap on socket 0 was shrunk by 6MB 00:04:11.747 EAL: Trying to obtain current memory policy. 00:04:11.747 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.747 EAL: Restoring previous memory policy: 4 00:04:11.747 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.747 EAL: request: mp_malloc_sync 00:04:11.747 EAL: No shared files mode enabled, IPC is disabled 00:04:11.747 EAL: Heap on socket 0 was expanded by 10MB 00:04:11.747 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.747 EAL: request: mp_malloc_sync 00:04:11.747 EAL: No shared files mode enabled, IPC is disabled 00:04:11.747 EAL: Heap on socket 0 was shrunk by 10MB 00:04:11.747 EAL: Trying to obtain current memory policy. 00:04:11.747 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.747 EAL: Restoring previous memory policy: 4 00:04:11.747 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.747 EAL: request: mp_malloc_sync 00:04:11.747 EAL: No shared files mode enabled, IPC is disabled 00:04:11.747 EAL: Heap on socket 0 was expanded by 18MB 00:04:11.747 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.747 EAL: request: mp_malloc_sync 00:04:11.747 EAL: No shared files mode enabled, IPC is disabled 00:04:11.747 EAL: Heap on socket 0 was shrunk by 18MB 00:04:11.747 EAL: Trying to obtain current memory policy. 
00:04:11.747 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.747 EAL: Restoring previous memory policy: 4 00:04:11.747 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.747 EAL: request: mp_malloc_sync 00:04:11.747 EAL: No shared files mode enabled, IPC is disabled 00:04:11.747 EAL: Heap on socket 0 was expanded by 34MB 00:04:11.747 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.747 EAL: request: mp_malloc_sync 00:04:11.747 EAL: No shared files mode enabled, IPC is disabled 00:04:11.747 EAL: Heap on socket 0 was shrunk by 34MB 00:04:12.005 EAL: Trying to obtain current memory policy. 00:04:12.005 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.005 EAL: Restoring previous memory policy: 4 00:04:12.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.005 EAL: request: mp_malloc_sync 00:04:12.005 EAL: No shared files mode enabled, IPC is disabled 00:04:12.005 EAL: Heap on socket 0 was expanded by 66MB 00:04:12.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.005 EAL: request: mp_malloc_sync 00:04:12.005 EAL: No shared files mode enabled, IPC is disabled 00:04:12.005 EAL: Heap on socket 0 was shrunk by 66MB 00:04:12.263 EAL: Trying to obtain current memory policy. 00:04:12.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.263 EAL: Restoring previous memory policy: 4 00:04:12.263 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.263 EAL: request: mp_malloc_sync 00:04:12.263 EAL: No shared files mode enabled, IPC is disabled 00:04:12.263 EAL: Heap on socket 0 was expanded by 130MB 00:04:12.520 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.520 EAL: request: mp_malloc_sync 00:04:12.521 EAL: No shared files mode enabled, IPC is disabled 00:04:12.521 EAL: Heap on socket 0 was shrunk by 130MB 00:04:12.778 EAL: Trying to obtain current memory policy. 
00:04:12.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.778 EAL: Restoring previous memory policy: 4 00:04:12.778 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.778 EAL: request: mp_malloc_sync 00:04:12.778 EAL: No shared files mode enabled, IPC is disabled 00:04:12.778 EAL: Heap on socket 0 was expanded by 258MB 00:04:13.364 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.364 EAL: request: mp_malloc_sync 00:04:13.364 EAL: No shared files mode enabled, IPC is disabled 00:04:13.364 EAL: Heap on socket 0 was shrunk by 258MB 00:04:13.929 EAL: Trying to obtain current memory policy. 00:04:13.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.929 EAL: Restoring previous memory policy: 4 00:04:13.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.929 EAL: request: mp_malloc_sync 00:04:13.929 EAL: No shared files mode enabled, IPC is disabled 00:04:13.929 EAL: Heap on socket 0 was expanded by 514MB 00:04:15.300 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.300 EAL: request: mp_malloc_sync 00:04:15.300 EAL: No shared files mode enabled, IPC is disabled 00:04:15.300 EAL: Heap on socket 0 was shrunk by 514MB 00:04:16.233 EAL: Trying to obtain current memory policy. 
00:04:16.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.490 EAL: Restoring previous memory policy: 4 00:04:16.490 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.490 EAL: request: mp_malloc_sync 00:04:16.490 EAL: No shared files mode enabled, IPC is disabled 00:04:16.490 EAL: Heap on socket 0 was expanded by 1026MB 00:04:18.395 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.680 EAL: request: mp_malloc_sync 00:04:18.680 EAL: No shared files mode enabled, IPC is disabled 00:04:18.680 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:21.214 passed 00:04:21.214 00:04:21.214 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.214 suites 1 1 n/a 0 0 00:04:21.214 tests 2 2 2 0 0 00:04:21.214 asserts 5838 5838 5838 0 n/a 00:04:21.214 00:04:21.214 Elapsed time = 9.472 seconds 00:04:21.214 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.214 EAL: request: mp_malloc_sync 00:04:21.214 EAL: No shared files mode enabled, IPC is disabled 00:04:21.214 EAL: Heap on socket 0 was shrunk by 2MB 00:04:21.214 EAL: No shared files mode enabled, IPC is disabled 00:04:21.214 EAL: No shared files mode enabled, IPC is disabled 00:04:21.214 EAL: No shared files mode enabled, IPC is disabled 00:04:21.214 00:04:21.214 real 0m9.811s 00:04:21.214 user 0m8.764s 00:04:21.214 sys 0m0.872s 00:04:21.214 10:49:27 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:21.214 10:49:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:21.214 ************************************ 00:04:21.214 END TEST env_vtophys 00:04:21.214 ************************************ 00:04:21.214 10:49:27 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:21.214 10:49:27 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:21.214 10:49:27 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:21.214 10:49:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.214 
************************************ 00:04:21.214 START TEST env_pci 00:04:21.214 ************************************ 00:04:21.214 10:49:27 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:21.214 00:04:21.214 00:04:21.214 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.214 http://cunit.sourceforge.net/ 00:04:21.214 00:04:21.214 00:04:21.214 Suite: pci 00:04:21.214 Test: pci_hook ...[2024-11-15 10:49:27.683964] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56875 has claimed it 00:04:21.214 passed 00:04:21.214 00:04:21.214 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.214 suites 1 1 n/a 0 0 00:04:21.214 tests 1 1 1 0 0 00:04:21.214 asserts 25 25 25 0 n/a 00:04:21.214 00:04:21.214 Elapsed time = 0.006 seconds 00:04:21.214 EAL: Cannot find device (10000:00:01.0) 00:04:21.214 EAL: Failed to attach device on primary process 00:04:21.214 00:04:21.214 real 0m0.106s 00:04:21.214 user 0m0.045s 00:04:21.214 sys 0m0.059s 00:04:21.214 10:49:27 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:21.214 10:49:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:21.214 ************************************ 00:04:21.214 END TEST env_pci 00:04:21.214 ************************************ 00:04:21.214 10:49:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:21.214 10:49:27 env -- env/env.sh@15 -- # uname 00:04:21.214 10:49:27 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:21.214 10:49:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:21.214 10:49:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.214 10:49:27 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:21.214 10:49:27 env 
-- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:21.214 10:49:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.214 ************************************ 00:04:21.214 START TEST env_dpdk_post_init 00:04:21.214 ************************************ 00:04:21.214 10:49:27 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.214 EAL: Detected CPU lcores: 10 00:04:21.214 EAL: Detected NUMA nodes: 1 00:04:21.214 EAL: Detected shared linkage of DPDK 00:04:21.214 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:21.214 EAL: Selected IOVA mode 'PA' 00:04:21.214 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:21.214 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:21.214 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:21.214 Starting DPDK initialization... 00:04:21.214 Starting SPDK post initialization... 00:04:21.214 SPDK NVMe probe 00:04:21.214 Attaching to 0000:00:10.0 00:04:21.214 Attaching to 0000:00:11.0 00:04:21.214 Attached to 0000:00:10.0 00:04:21.214 Attached to 0000:00:11.0 00:04:21.214 Cleaning up... 
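Probe lines like `EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)` pack the driver name, vendor:device ID, and PCI BDF address into one message. A small parser for that format (the regex is inferred from the lines in this log, not from any EAL message specification):

```python
import re

# Matches "Probe PCI driver: <drv> (<vendor>:<device>) device: <bdf> (socket <n>)"
PROBE_RE = re.compile(
    r"Probe PCI driver: (\S+) \((\w{4}):(\w{4})\) device: (\S+) \(socket (-?\d+)\)"
)

def parse_probe(line):
    """Extract driver, PCI ID, BDF address, and NUMA socket from an EAL probe line."""
    m = PROBE_RE.search(line)
    if not m:
        return None
    drv, vendor, device, bdf, socket = m.groups()
    return {"driver": drv, "id": f"{vendor}:{device}", "bdf": bdf, "socket": int(socket)}

line = "EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)"
print(parse_probe(line))
# {'driver': 'spdk_nvme', 'id': '1b36:0010', 'bdf': '0000:00:10.0', 'socket': -1}
```

The `socket -1` here means the (virtual) device reports no NUMA affinity, consistent with the single detected NUMA node earlier in this run.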
00:04:21.214 00:04:21.214 real 0m0.284s 00:04:21.214 user 0m0.091s 00:04:21.214 sys 0m0.094s 00:04:21.214 10:49:28 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:21.214 10:49:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.214 ************************************ 00:04:21.214 END TEST env_dpdk_post_init 00:04:21.214 ************************************ 00:04:21.472 10:49:28 env -- env/env.sh@26 -- # uname 00:04:21.472 10:49:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:21.472 10:49:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:21.472 10:49:28 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:21.472 10:49:28 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:21.472 10:49:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.472 ************************************ 00:04:21.472 START TEST env_mem_callbacks 00:04:21.472 ************************************ 00:04:21.472 10:49:28 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:21.472 EAL: Detected CPU lcores: 10 00:04:21.472 EAL: Detected NUMA nodes: 1 00:04:21.472 EAL: Detected shared linkage of DPDK 00:04:21.472 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:21.472 EAL: Selected IOVA mode 'PA' 00:04:21.472 00:04:21.472 00:04:21.472 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.472 http://cunit.sourceforge.net/ 00:04:21.472 00:04:21.473 00:04:21.473 Suite: memory
TELEMETRY: No legacy callbacks, legacy socket not created 00:04:21.473 00:04:21.473 Test: test ...
00:04:21.473 register 0x200000200000 2097152 00:04:21.473 malloc 3145728 00:04:21.473 register 0x200000400000 4194304 00:04:21.473 buf 0x2000004fffc0 len 3145728 PASSED 00:04:21.473 malloc 64 00:04:21.473 buf 0x2000004ffec0 len 64 PASSED 00:04:21.473 malloc 4194304 00:04:21.473 register 0x200000800000 6291456 00:04:21.473 buf 0x2000009fffc0 len 4194304 PASSED 00:04:21.473 free 0x2000004fffc0 3145728 00:04:21.473 free 0x2000004ffec0 64 00:04:21.473 unregister 0x200000400000 4194304 PASSED 00:04:21.473 free 0x2000009fffc0 4194304 00:04:21.473 unregister 0x200000800000 6291456 PASSED 00:04:21.473 malloc 8388608 00:04:21.473 register 0x200000400000 10485760 00:04:21.731 buf 0x2000005fffc0 len 8388608 PASSED 00:04:21.731 free 0x2000005fffc0 8388608 00:04:21.731 unregister 0x200000400000 10485760 PASSED 00:04:21.731 passed 00:04:21.731 00:04:21.731 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.731 suites 1 1 n/a 0 0 00:04:21.731 tests 1 1 1 0 0 00:04:21.731 asserts 15 15 15 0 n/a 00:04:21.731 00:04:21.731 Elapsed time = 0.079 seconds 00:04:21.731 00:04:21.731 real 0m0.275s 00:04:21.731 user 0m0.107s 00:04:21.731 sys 0m0.066s 00:04:21.731 10:49:28 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:21.731 10:49:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:21.731 ************************************ 00:04:21.731 END TEST env_mem_callbacks 00:04:21.731 ************************************ 00:04:21.731 00:04:21.731 real 0m11.340s 00:04:21.731 user 0m9.510s 00:04:21.731 sys 0m1.450s 00:04:21.731 10:49:28 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:21.731 10:49:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.731 ************************************ 00:04:21.731 END TEST env 00:04:21.731 ************************************ 00:04:21.731 10:49:28 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:21.731 10:49:28 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:21.731 10:49:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:21.731 10:49:28 -- common/autotest_common.sh@10 -- # set +x 00:04:21.731 ************************************ 00:04:21.731 START TEST rpc 00:04:21.731 ************************************ 00:04:21.731 10:49:28 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:21.731 * Looking for test storage... 00:04:21.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:21.989 10:49:28 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:21.989 10:49:28 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:21.989 10:49:28 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:21.989 10:49:28 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:21.989 10:49:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.989 10:49:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.989 10:49:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.989 10:49:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.989 10:49:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.989 10:49:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.989 10:49:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.989 10:49:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.989 10:49:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.989 10:49:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.989 10:49:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.989 10:49:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:21.989 10:49:28 rpc -- scripts/common.sh@345 -- # : 1 00:04:21.989 10:49:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.989 10:49:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.989 10:49:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:21.989 10:49:28 rpc -- scripts/common.sh@353 -- # local d=1 00:04:21.989 10:49:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.989 10:49:28 rpc -- scripts/common.sh@355 -- # echo 1 00:04:21.989 10:49:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.989 10:49:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:21.989 10:49:28 rpc -- scripts/common.sh@353 -- # local d=2 00:04:21.989 10:49:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.989 10:49:28 rpc -- scripts/common.sh@355 -- # echo 2 00:04:21.989 10:49:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.989 10:49:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.989 10:49:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.989 10:49:28 rpc -- scripts/common.sh@368 -- # return 0 00:04:21.989 10:49:28 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.989 10:49:28 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:21.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.989 --rc genhtml_branch_coverage=1 00:04:21.989 --rc genhtml_function_coverage=1 00:04:21.989 --rc genhtml_legend=1 00:04:21.989 --rc geninfo_all_blocks=1 00:04:21.989 --rc geninfo_unexecuted_blocks=1 00:04:21.989 00:04:21.989 ' 00:04:21.989 10:49:28 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:21.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.989 --rc genhtml_branch_coverage=1 00:04:21.989 --rc genhtml_function_coverage=1 00:04:21.989 --rc genhtml_legend=1 00:04:21.989 --rc geninfo_all_blocks=1 00:04:21.989 --rc geninfo_unexecuted_blocks=1 00:04:21.989 00:04:21.989 ' 00:04:21.989 10:49:28 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:21.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:21.989 --rc genhtml_branch_coverage=1 00:04:21.989 --rc genhtml_function_coverage=1 00:04:21.989 --rc genhtml_legend=1 00:04:21.989 --rc geninfo_all_blocks=1 00:04:21.989 --rc geninfo_unexecuted_blocks=1 00:04:21.989 00:04:21.989 ' 00:04:21.989 10:49:28 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:21.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.989 --rc genhtml_branch_coverage=1 00:04:21.989 --rc genhtml_function_coverage=1 00:04:21.989 --rc genhtml_legend=1 00:04:21.989 --rc geninfo_all_blocks=1 00:04:21.989 --rc geninfo_unexecuted_blocks=1 00:04:21.989 00:04:21.989 ' 00:04:21.989 10:49:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57002 00:04:21.989 10:49:28 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:21.989 10:49:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.989 10:49:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57002 00:04:21.989 10:49:28 rpc -- common/autotest_common.sh@833 -- # '[' -z 57002 ']' 00:04:21.989 10:49:28 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.989 10:49:28 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:21.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.989 10:49:28 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.989 10:49:28 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:21.989 10:49:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.989 [2024-11-15 10:49:28.864084] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
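The long xtrace run above steps through scripts/common.sh's `cmp_versions` helper: it splits each version string on `.`, `-`, and `:` (the `IFS=.-:` lines), then compares field by field to decide whether the installed lcov predates version 2. A rough Python equivalent of that field-wise `<` comparison (my sketch of the idea, not SPDK's shell code):

```python
import re

def split_fields(ver):
    # Mirror the shell's IFS=.-: split: '.', '-' and ':' all separate fields
    return [int(f) if f.isdigit() else 0 for f in re.split(r"[.:-]", ver)]

def version_lt(a, b):
    fa, fb = split_fields(a), split_fields(b)
    n = max(len(fa), len(fb))
    fa += [0] * (n - len(fa))   # missing trailing fields count as 0
    fb += [0] * (n - len(fb))
    return fa < fb              # Python lists compare field by field

print(version_lt("1.15", "2"))  # True: lcov 1.15 predates 2
print(version_lt("2.1", "2"))   # False
```

This matches the trace above, where `lt 1.15 2` returns 0 (true) and the lcov-specific `--rc` coverage options get enabled.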
00:04:21.989 [2024-11-15 10:49:28.864213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57002 ] 00:04:22.248 [2024-11-15 10:49:29.039881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.248 [2024-11-15 10:49:29.167697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:22.248 [2024-11-15 10:49:29.167781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57002' to capture a snapshot of events at runtime. 00:04:22.248 [2024-11-15 10:49:29.167795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:22.248 [2024-11-15 10:49:29.167806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:22.248 [2024-11-15 10:49:29.167815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57002 for offline analysis/debug. 
00:04:22.248 [2024-11-15 10:49:29.169313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.184 10:49:30 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:23.184 10:49:30 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:23.184 10:49:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.184 10:49:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.184 10:49:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:23.184 10:49:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:23.184 10:49:30 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:23.184 10:49:30 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:23.184 10:49:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.184 ************************************ 00:04:23.184 START TEST rpc_integrity 00:04:23.184 ************************************ 00:04:23.184 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:23.184 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:23.184 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.184 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.443 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.443 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:23.443 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:23.443 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:23.443 10:49:30 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:23.443 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.443 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.443 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.443 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:23.443 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:23.443 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.443 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.443 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.443 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:23.443 { 00:04:23.443 "name": "Malloc0", 00:04:23.443 "aliases": [ 00:04:23.443 "6d0efd4a-8101-4bfe-87fc-c6bfcc1a5c74" 00:04:23.443 ], 00:04:23.443 "product_name": "Malloc disk", 00:04:23.443 "block_size": 512, 00:04:23.443 "num_blocks": 16384, 00:04:23.443 "uuid": "6d0efd4a-8101-4bfe-87fc-c6bfcc1a5c74", 00:04:23.443 "assigned_rate_limits": { 00:04:23.443 "rw_ios_per_sec": 0, 00:04:23.443 "rw_mbytes_per_sec": 0, 00:04:23.443 "r_mbytes_per_sec": 0, 00:04:23.443 "w_mbytes_per_sec": 0 00:04:23.443 }, 00:04:23.443 "claimed": false, 00:04:23.443 "zoned": false, 00:04:23.443 "supported_io_types": { 00:04:23.443 "read": true, 00:04:23.443 "write": true, 00:04:23.443 "unmap": true, 00:04:23.443 "flush": true, 00:04:23.443 "reset": true, 00:04:23.443 "nvme_admin": false, 00:04:23.443 "nvme_io": false, 00:04:23.443 "nvme_io_md": false, 00:04:23.443 "write_zeroes": true, 00:04:23.443 "zcopy": true, 00:04:23.443 "get_zone_info": false, 00:04:23.443 "zone_management": false, 00:04:23.443 "zone_append": false, 00:04:23.443 "compare": false, 00:04:23.443 "compare_and_write": false, 00:04:23.443 "abort": true, 00:04:23.443 "seek_hole": false, 
00:04:23.443 "seek_data": false, 00:04:23.443 "copy": true, 00:04:23.443 "nvme_iov_md": false 00:04:23.443 }, 00:04:23.443 "memory_domains": [ 00:04:23.443 { 00:04:23.443 "dma_device_id": "system", 00:04:23.443 "dma_device_type": 1 00:04:23.443 }, 00:04:23.443 { 00:04:23.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.443 "dma_device_type": 2 00:04:23.443 } 00:04:23.443 ], 00:04:23.443 "driver_specific": {} 00:04:23.443 } 00:04:23.443 ]' 00:04:23.443 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:23.443 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:23.443 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:23.443 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.444 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.444 [2024-11-15 10:49:30.290617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:23.444 [2024-11-15 10:49:30.290719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:23.444 [2024-11-15 10:49:30.290751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:23.444 [2024-11-15 10:49:30.290769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:23.444 [2024-11-15 10:49:30.293490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:23.444 [2024-11-15 10:49:30.293543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:23.444 Passthru0 00:04:23.444 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.444 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:23.444 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.444 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:23.444 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.444 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:23.444 { 00:04:23.444 "name": "Malloc0", 00:04:23.444 "aliases": [ 00:04:23.444 "6d0efd4a-8101-4bfe-87fc-c6bfcc1a5c74" 00:04:23.444 ], 00:04:23.444 "product_name": "Malloc disk", 00:04:23.444 "block_size": 512, 00:04:23.444 "num_blocks": 16384, 00:04:23.444 "uuid": "6d0efd4a-8101-4bfe-87fc-c6bfcc1a5c74", 00:04:23.444 "assigned_rate_limits": { 00:04:23.444 "rw_ios_per_sec": 0, 00:04:23.444 "rw_mbytes_per_sec": 0, 00:04:23.444 "r_mbytes_per_sec": 0, 00:04:23.444 "w_mbytes_per_sec": 0 00:04:23.444 }, 00:04:23.444 "claimed": true, 00:04:23.444 "claim_type": "exclusive_write", 00:04:23.444 "zoned": false, 00:04:23.444 "supported_io_types": { 00:04:23.444 "read": true, 00:04:23.444 "write": true, 00:04:23.444 "unmap": true, 00:04:23.444 "flush": true, 00:04:23.444 "reset": true, 00:04:23.444 "nvme_admin": false, 00:04:23.444 "nvme_io": false, 00:04:23.444 "nvme_io_md": false, 00:04:23.444 "write_zeroes": true, 00:04:23.444 "zcopy": true, 00:04:23.444 "get_zone_info": false, 00:04:23.444 "zone_management": false, 00:04:23.444 "zone_append": false, 00:04:23.444 "compare": false, 00:04:23.444 "compare_and_write": false, 00:04:23.444 "abort": true, 00:04:23.444 "seek_hole": false, 00:04:23.444 "seek_data": false, 00:04:23.444 "copy": true, 00:04:23.444 "nvme_iov_md": false 00:04:23.444 }, 00:04:23.444 "memory_domains": [ 00:04:23.444 { 00:04:23.444 "dma_device_id": "system", 00:04:23.444 "dma_device_type": 1 00:04:23.444 }, 00:04:23.444 { 00:04:23.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.444 "dma_device_type": 2 00:04:23.444 } 00:04:23.444 ], 00:04:23.444 "driver_specific": {} 00:04:23.444 }, 00:04:23.444 { 00:04:23.444 "name": "Passthru0", 00:04:23.444 "aliases": [ 00:04:23.444 "6c225635-da3d-5427-8433-54cd8e2481b7" 00:04:23.444 ], 00:04:23.444 "product_name": "passthru", 00:04:23.444 
"block_size": 512, 00:04:23.444 "num_blocks": 16384, 00:04:23.444 "uuid": "6c225635-da3d-5427-8433-54cd8e2481b7", 00:04:23.444 "assigned_rate_limits": { 00:04:23.444 "rw_ios_per_sec": 0, 00:04:23.444 "rw_mbytes_per_sec": 0, 00:04:23.444 "r_mbytes_per_sec": 0, 00:04:23.444 "w_mbytes_per_sec": 0 00:04:23.444 }, 00:04:23.444 "claimed": false, 00:04:23.444 "zoned": false, 00:04:23.444 "supported_io_types": { 00:04:23.444 "read": true, 00:04:23.444 "write": true, 00:04:23.444 "unmap": true, 00:04:23.444 "flush": true, 00:04:23.444 "reset": true, 00:04:23.444 "nvme_admin": false, 00:04:23.444 "nvme_io": false, 00:04:23.444 "nvme_io_md": false, 00:04:23.444 "write_zeroes": true, 00:04:23.444 "zcopy": true, 00:04:23.444 "get_zone_info": false, 00:04:23.444 "zone_management": false, 00:04:23.444 "zone_append": false, 00:04:23.444 "compare": false, 00:04:23.444 "compare_and_write": false, 00:04:23.444 "abort": true, 00:04:23.444 "seek_hole": false, 00:04:23.444 "seek_data": false, 00:04:23.444 "copy": true, 00:04:23.444 "nvme_iov_md": false 00:04:23.444 }, 00:04:23.444 "memory_domains": [ 00:04:23.444 { 00:04:23.444 "dma_device_id": "system", 00:04:23.444 "dma_device_type": 1 00:04:23.444 }, 00:04:23.444 { 00:04:23.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.444 "dma_device_type": 2 00:04:23.444 } 00:04:23.444 ], 00:04:23.444 "driver_specific": { 00:04:23.444 "passthru": { 00:04:23.444 "name": "Passthru0", 00:04:23.444 "base_bdev_name": "Malloc0" 00:04:23.444 } 00:04:23.444 } 00:04:23.444 } 00:04:23.444 ]' 00:04:23.444 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:23.444 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:23.444 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:23.444 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.444 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.704 10:49:30 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.704 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:23.704 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.704 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.704 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.704 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:23.704 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.704 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.704 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.704 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:23.704 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:23.704 10:49:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:23.704 00:04:23.704 real 0m0.379s 00:04:23.704 user 0m0.199s 00:04:23.704 sys 0m0.065s 00:04:23.704 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:23.704 10:49:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.704 ************************************ 00:04:23.704 END TEST rpc_integrity 00:04:23.704 ************************************ 00:04:23.704 10:49:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:23.704 10:49:30 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:23.704 10:49:30 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:23.704 10:49:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.704 ************************************ 00:04:23.704 START TEST rpc_plugins 00:04:23.704 ************************************ 00:04:23.704 10:49:30 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:23.704 10:49:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:23.704 10:49:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.704 10:49:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.704 10:49:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.704 10:49:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:23.704 10:49:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:23.704 10:49:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.704 10:49:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.704 10:49:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.704 10:49:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:23.704 { 00:04:23.704 "name": "Malloc1", 00:04:23.704 "aliases": [ 00:04:23.704 "9cedfb53-9e67-4b4d-9955-c49381586a2a" 00:04:23.704 ], 00:04:23.704 "product_name": "Malloc disk", 00:04:23.704 "block_size": 4096, 00:04:23.704 "num_blocks": 256, 00:04:23.704 "uuid": "9cedfb53-9e67-4b4d-9955-c49381586a2a", 00:04:23.704 "assigned_rate_limits": { 00:04:23.704 "rw_ios_per_sec": 0, 00:04:23.704 "rw_mbytes_per_sec": 0, 00:04:23.704 "r_mbytes_per_sec": 0, 00:04:23.704 "w_mbytes_per_sec": 0 00:04:23.704 }, 00:04:23.704 "claimed": false, 00:04:23.704 "zoned": false, 00:04:23.704 "supported_io_types": { 00:04:23.704 "read": true, 00:04:23.704 "write": true, 00:04:23.704 "unmap": true, 00:04:23.704 "flush": true, 00:04:23.704 "reset": true, 00:04:23.704 "nvme_admin": false, 00:04:23.704 "nvme_io": false, 00:04:23.704 "nvme_io_md": false, 00:04:23.704 "write_zeroes": true, 00:04:23.704 "zcopy": true, 00:04:23.704 "get_zone_info": false, 00:04:23.704 "zone_management": false, 00:04:23.704 "zone_append": false, 00:04:23.704 "compare": false, 00:04:23.704 "compare_and_write": false, 00:04:23.704 "abort": true, 00:04:23.704 "seek_hole": false, 00:04:23.704 "seek_data": false, 00:04:23.704 "copy": 
true, 00:04:23.704 "nvme_iov_md": false 00:04:23.704 }, 00:04:23.704 "memory_domains": [ 00:04:23.704 { 00:04:23.704 "dma_device_id": "system", 00:04:23.704 "dma_device_type": 1 00:04:23.704 }, 00:04:23.704 { 00:04:23.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.704 "dma_device_type": 2 00:04:23.704 } 00:04:23.704 ], 00:04:23.704 "driver_specific": {} 00:04:23.704 } 00:04:23.704 ]' 00:04:23.704 10:49:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:23.964 10:49:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:23.964 10:49:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:23.965 10:49:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.965 10:49:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.965 10:49:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.965 10:49:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:23.965 10:49:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.965 10:49:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.965 10:49:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.965 10:49:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:23.965 10:49:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:23.965 10:49:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:23.965 00:04:23.965 real 0m0.171s 00:04:23.965 user 0m0.096s 00:04:23.965 sys 0m0.028s 00:04:23.965 10:49:30 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:23.965 10:49:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.965 ************************************ 00:04:23.965 END TEST rpc_plugins 00:04:23.965 ************************************ 00:04:23.965 10:49:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:23.965 10:49:30 rpc -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:23.965 10:49:30 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:23.965 10:49:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.965 ************************************ 00:04:23.965 START TEST rpc_trace_cmd_test 00:04:23.965 ************************************ 00:04:23.965 10:49:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:23.965 10:49:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:23.965 10:49:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:23.965 10:49:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.965 10:49:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:23.965 10:49:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.965 10:49:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:23.965 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57002", 00:04:23.965 "tpoint_group_mask": "0x8", 00:04:23.965 "iscsi_conn": { 00:04:23.965 "mask": "0x2", 00:04:23.965 "tpoint_mask": "0x0" 00:04:23.965 }, 00:04:23.965 "scsi": { 00:04:23.965 "mask": "0x4", 00:04:23.965 "tpoint_mask": "0x0" 00:04:23.965 }, 00:04:23.965 "bdev": { 00:04:23.965 "mask": "0x8", 00:04:23.965 "tpoint_mask": "0xffffffffffffffff" 00:04:23.965 }, 00:04:23.965 "nvmf_rdma": { 00:04:23.965 "mask": "0x10", 00:04:23.965 "tpoint_mask": "0x0" 00:04:23.965 }, 00:04:23.965 "nvmf_tcp": { 00:04:23.965 "mask": "0x20", 00:04:23.965 "tpoint_mask": "0x0" 00:04:23.965 }, 00:04:23.965 "ftl": { 00:04:23.965 "mask": "0x40", 00:04:23.965 "tpoint_mask": "0x0" 00:04:23.965 }, 00:04:23.965 "blobfs": { 00:04:23.965 "mask": "0x80", 00:04:23.965 "tpoint_mask": "0x0" 00:04:23.965 }, 00:04:23.965 "dsa": { 00:04:23.965 "mask": "0x200", 00:04:23.965 "tpoint_mask": "0x0" 00:04:23.965 }, 00:04:23.965 "thread": { 00:04:23.965 "mask": "0x400", 00:04:23.965 
"tpoint_mask": "0x0" 00:04:23.965 }, 00:04:23.965 "nvme_pcie": { 00:04:23.965 "mask": "0x800", 00:04:23.965 "tpoint_mask": "0x0" 00:04:23.965 }, 00:04:23.965 "iaa": { 00:04:23.965 "mask": "0x1000", 00:04:23.965 "tpoint_mask": "0x0" 00:04:23.965 }, 00:04:23.965 "nvme_tcp": { 00:04:23.965 "mask": "0x2000", 00:04:23.965 "tpoint_mask": "0x0" 00:04:23.965 }, 00:04:23.965 "bdev_nvme": { 00:04:23.965 "mask": "0x4000", 00:04:23.965 "tpoint_mask": "0x0" 00:04:23.965 }, 00:04:23.965 "sock": { 00:04:23.965 "mask": "0x8000", 00:04:23.965 "tpoint_mask": "0x0" 00:04:23.965 }, 00:04:23.965 "blob": { 00:04:23.965 "mask": "0x10000", 00:04:23.965 "tpoint_mask": "0x0" 00:04:23.965 }, 00:04:23.965 "bdev_raid": { 00:04:23.965 "mask": "0x20000", 00:04:23.965 "tpoint_mask": "0x0" 00:04:23.965 }, 00:04:23.965 "scheduler": { 00:04:23.965 "mask": "0x40000", 00:04:23.965 "tpoint_mask": "0x0" 00:04:23.965 } 00:04:23.965 }' 00:04:23.965 10:49:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:23.965 10:49:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:23.965 10:49:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:24.225 10:49:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:24.225 10:49:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:24.225 10:49:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:24.225 10:49:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:24.225 10:49:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:24.225 10:49:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:24.225 10:49:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:24.225 00:04:24.225 real 0m0.248s 00:04:24.225 user 0m0.198s 00:04:24.225 sys 0m0.039s 00:04:24.225 10:49:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:04:24.225 10:49:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.225 ************************************ 00:04:24.225 END TEST rpc_trace_cmd_test 00:04:24.225 ************************************ 00:04:24.225 10:49:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:24.225 10:49:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:24.225 10:49:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:24.225 10:49:31 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:24.225 10:49:31 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.225 10:49:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.225 ************************************ 00:04:24.225 START TEST rpc_daemon_integrity 00:04:24.225 ************************************ 00:04:24.225 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:24.225 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:24.225 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.225 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.225 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.225 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:24.225 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:24.485 { 00:04:24.485 "name": "Malloc2", 00:04:24.485 "aliases": [ 00:04:24.485 "a0c06e84-e4ac-4cdd-b2de-ba70d83d0235" 00:04:24.485 ], 00:04:24.485 "product_name": "Malloc disk", 00:04:24.485 "block_size": 512, 00:04:24.485 "num_blocks": 16384, 00:04:24.485 "uuid": "a0c06e84-e4ac-4cdd-b2de-ba70d83d0235", 00:04:24.485 "assigned_rate_limits": { 00:04:24.485 "rw_ios_per_sec": 0, 00:04:24.485 "rw_mbytes_per_sec": 0, 00:04:24.485 "r_mbytes_per_sec": 0, 00:04:24.485 "w_mbytes_per_sec": 0 00:04:24.485 }, 00:04:24.485 "claimed": false, 00:04:24.485 "zoned": false, 00:04:24.485 "supported_io_types": { 00:04:24.485 "read": true, 00:04:24.485 "write": true, 00:04:24.485 "unmap": true, 00:04:24.485 "flush": true, 00:04:24.485 "reset": true, 00:04:24.485 "nvme_admin": false, 00:04:24.485 "nvme_io": false, 00:04:24.485 "nvme_io_md": false, 00:04:24.485 "write_zeroes": true, 00:04:24.485 "zcopy": true, 00:04:24.485 "get_zone_info": false, 00:04:24.485 "zone_management": false, 00:04:24.485 "zone_append": false, 00:04:24.485 "compare": false, 00:04:24.485 "compare_and_write": false, 00:04:24.485 "abort": true, 00:04:24.485 "seek_hole": false, 00:04:24.485 "seek_data": false, 00:04:24.485 "copy": true, 00:04:24.485 "nvme_iov_md": false 00:04:24.485 }, 00:04:24.485 "memory_domains": [ 00:04:24.485 { 00:04:24.485 "dma_device_id": "system", 00:04:24.485 "dma_device_type": 1 00:04:24.485 }, 00:04:24.485 { 00:04:24.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.485 "dma_device_type": 2 00:04:24.485 } 
00:04:24.485 ], 00:04:24.485 "driver_specific": {} 00:04:24.485 } 00:04:24.485 ]' 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.485 [2024-11-15 10:49:31.257044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:24.485 [2024-11-15 10:49:31.257122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:24.485 [2024-11-15 10:49:31.257150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:24.485 [2024-11-15 10:49:31.257163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:24.485 [2024-11-15 10:49:31.259805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:24.485 [2024-11-15 10:49:31.259855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:24.485 Passthru0 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.485 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:24.485 { 00:04:24.485 "name": "Malloc2", 00:04:24.485 "aliases": [ 00:04:24.485 "a0c06e84-e4ac-4cdd-b2de-ba70d83d0235" 
00:04:24.485 ], 00:04:24.485 "product_name": "Malloc disk", 00:04:24.485 "block_size": 512, 00:04:24.485 "num_blocks": 16384, 00:04:24.485 "uuid": "a0c06e84-e4ac-4cdd-b2de-ba70d83d0235", 00:04:24.485 "assigned_rate_limits": { 00:04:24.485 "rw_ios_per_sec": 0, 00:04:24.485 "rw_mbytes_per_sec": 0, 00:04:24.485 "r_mbytes_per_sec": 0, 00:04:24.485 "w_mbytes_per_sec": 0 00:04:24.485 }, 00:04:24.485 "claimed": true, 00:04:24.485 "claim_type": "exclusive_write", 00:04:24.485 "zoned": false, 00:04:24.485 "supported_io_types": { 00:04:24.485 "read": true, 00:04:24.485 "write": true, 00:04:24.485 "unmap": true, 00:04:24.485 "flush": true, 00:04:24.485 "reset": true, 00:04:24.485 "nvme_admin": false, 00:04:24.485 "nvme_io": false, 00:04:24.485 "nvme_io_md": false, 00:04:24.485 "write_zeroes": true, 00:04:24.485 "zcopy": true, 00:04:24.485 "get_zone_info": false, 00:04:24.485 "zone_management": false, 00:04:24.485 "zone_append": false, 00:04:24.485 "compare": false, 00:04:24.485 "compare_and_write": false, 00:04:24.485 "abort": true, 00:04:24.485 "seek_hole": false, 00:04:24.485 "seek_data": false, 00:04:24.485 "copy": true, 00:04:24.485 "nvme_iov_md": false 00:04:24.485 }, 00:04:24.485 "memory_domains": [ 00:04:24.485 { 00:04:24.485 "dma_device_id": "system", 00:04:24.485 "dma_device_type": 1 00:04:24.485 }, 00:04:24.485 { 00:04:24.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.485 "dma_device_type": 2 00:04:24.485 } 00:04:24.485 ], 00:04:24.485 "driver_specific": {} 00:04:24.485 }, 00:04:24.485 { 00:04:24.485 "name": "Passthru0", 00:04:24.485 "aliases": [ 00:04:24.485 "f8660fac-30b3-51dd-bd8f-92de27a55819" 00:04:24.485 ], 00:04:24.485 "product_name": "passthru", 00:04:24.485 "block_size": 512, 00:04:24.485 "num_blocks": 16384, 00:04:24.485 "uuid": "f8660fac-30b3-51dd-bd8f-92de27a55819", 00:04:24.485 "assigned_rate_limits": { 00:04:24.485 "rw_ios_per_sec": 0, 00:04:24.486 "rw_mbytes_per_sec": 0, 00:04:24.486 "r_mbytes_per_sec": 0, 00:04:24.486 "w_mbytes_per_sec": 0 
00:04:24.486 }, 00:04:24.486 "claimed": false, 00:04:24.486 "zoned": false, 00:04:24.486 "supported_io_types": { 00:04:24.486 "read": true, 00:04:24.486 "write": true, 00:04:24.486 "unmap": true, 00:04:24.486 "flush": true, 00:04:24.486 "reset": true, 00:04:24.486 "nvme_admin": false, 00:04:24.486 "nvme_io": false, 00:04:24.486 "nvme_io_md": false, 00:04:24.486 "write_zeroes": true, 00:04:24.486 "zcopy": true, 00:04:24.486 "get_zone_info": false, 00:04:24.486 "zone_management": false, 00:04:24.486 "zone_append": false, 00:04:24.486 "compare": false, 00:04:24.486 "compare_and_write": false, 00:04:24.486 "abort": true, 00:04:24.486 "seek_hole": false, 00:04:24.486 "seek_data": false, 00:04:24.486 "copy": true, 00:04:24.486 "nvme_iov_md": false 00:04:24.486 }, 00:04:24.486 "memory_domains": [ 00:04:24.486 { 00:04:24.486 "dma_device_id": "system", 00:04:24.486 "dma_device_type": 1 00:04:24.486 }, 00:04:24.486 { 00:04:24.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.486 "dma_device_type": 2 00:04:24.486 } 00:04:24.486 ], 00:04:24.486 "driver_specific": { 00:04:24.486 "passthru": { 00:04:24.486 "name": "Passthru0", 00:04:24.486 "base_bdev_name": "Malloc2" 00:04:24.486 } 00:04:24.486 } 00:04:24.486 } 00:04:24.486 ]' 00:04:24.486 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:24.486 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:24.486 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:24.486 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.486 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.486 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.486 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:24.486 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:04:24.486 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.486 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.486 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:24.486 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.486 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.746 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.746 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:24.746 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:24.746 10:49:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:24.746 00:04:24.746 real 0m0.372s 00:04:24.746 user 0m0.195s 00:04:24.746 sys 0m0.058s 00:04:24.746 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:24.746 10:49:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.746 ************************************ 00:04:24.746 END TEST rpc_daemon_integrity 00:04:24.746 ************************************ 00:04:24.746 10:49:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:24.746 10:49:31 rpc -- rpc/rpc.sh@84 -- # killprocess 57002 00:04:24.746 10:49:31 rpc -- common/autotest_common.sh@952 -- # '[' -z 57002 ']' 00:04:24.746 10:49:31 rpc -- common/autotest_common.sh@956 -- # kill -0 57002 00:04:24.746 10:49:31 rpc -- common/autotest_common.sh@957 -- # uname 00:04:24.746 10:49:31 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:24.746 10:49:31 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57002 00:04:24.746 10:49:31 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:24.746 10:49:31 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:24.746 
10:49:31 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57002' 00:04:24.746 killing process with pid 57002 00:04:24.746 10:49:31 rpc -- common/autotest_common.sh@971 -- # kill 57002 00:04:24.746 10:49:31 rpc -- common/autotest_common.sh@976 -- # wait 57002 00:04:27.283 00:04:27.283 real 0m5.467s 00:04:27.283 user 0m6.067s 00:04:27.283 sys 0m0.939s 00:04:27.283 10:49:34 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.283 10:49:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.283 ************************************ 00:04:27.283 END TEST rpc 00:04:27.283 ************************************ 00:04:27.283 10:49:34 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:27.283 10:49:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.283 10:49:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.283 10:49:34 -- common/autotest_common.sh@10 -- # set +x 00:04:27.283 ************************************ 00:04:27.283 START TEST skip_rpc 00:04:27.283 ************************************ 00:04:27.283 10:49:34 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:27.283 * Looking for test storage... 
00:04:27.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:27.283 10:49:34 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:27.283 10:49:34 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:27.283 10:49:34 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:27.542 10:49:34 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:27.542 10:49:34 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.543 10:49:34 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:27.543 10:49:34 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:27.543 10:49:34 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.543 10:49:34 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:27.543 10:49:34 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.543 10:49:34 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.543 10:49:34 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.543 10:49:34 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:27.543 10:49:34 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.543 10:49:34 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:27.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.543 --rc genhtml_branch_coverage=1 00:04:27.543 --rc genhtml_function_coverage=1 00:04:27.543 --rc genhtml_legend=1 00:04:27.543 --rc geninfo_all_blocks=1 00:04:27.543 --rc geninfo_unexecuted_blocks=1 00:04:27.543 00:04:27.543 ' 00:04:27.543 10:49:34 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:27.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.543 --rc genhtml_branch_coverage=1 00:04:27.543 --rc genhtml_function_coverage=1 00:04:27.543 --rc genhtml_legend=1 00:04:27.543 --rc geninfo_all_blocks=1 00:04:27.543 --rc geninfo_unexecuted_blocks=1 00:04:27.543 00:04:27.543 ' 00:04:27.543 10:49:34 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:04:27.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.543 --rc genhtml_branch_coverage=1 00:04:27.543 --rc genhtml_function_coverage=1 00:04:27.543 --rc genhtml_legend=1 00:04:27.543 --rc geninfo_all_blocks=1 00:04:27.543 --rc geninfo_unexecuted_blocks=1 00:04:27.543 00:04:27.543 ' 00:04:27.543 10:49:34 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:27.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.543 --rc genhtml_branch_coverage=1 00:04:27.543 --rc genhtml_function_coverage=1 00:04:27.543 --rc genhtml_legend=1 00:04:27.543 --rc geninfo_all_blocks=1 00:04:27.543 --rc geninfo_unexecuted_blocks=1 00:04:27.543 00:04:27.543 ' 00:04:27.543 10:49:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:27.543 10:49:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:27.543 10:49:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:27.543 10:49:34 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.543 10:49:34 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.543 10:49:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.543 ************************************ 00:04:27.543 START TEST skip_rpc 00:04:27.543 ************************************ 00:04:27.543 10:49:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:27.543 10:49:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57231 00:04:27.543 10:49:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:27.543 10:49:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.543 10:49:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:27.543 [2024-11-15 10:49:34.414985] Starting SPDK v25.01-pre 
git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:04:27.543 [2024-11-15 10:49:34.415108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57231 ] 00:04:27.802 [2024-11-15 10:49:34.579444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.802 [2024-11-15 10:49:34.698932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57231 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57231 ']' 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57231 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57231 00:04:33.067 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:33.068 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:33.068 killing process with pid 57231 00:04:33.068 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57231' 00:04:33.068 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57231 00:04:33.068 10:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57231 00:04:34.973 00:04:34.973 real 0m7.567s 00:04:34.973 user 0m7.108s 00:04:34.973 sys 0m0.379s 00:04:34.973 10:49:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:34.973 10:49:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.973 ************************************ 00:04:34.973 END TEST skip_rpc 00:04:34.973 ************************************ 00:04:35.232 10:49:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:35.232 10:49:41 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:35.232 10:49:41 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.232 10:49:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.232 
************************************ 00:04:35.232 START TEST skip_rpc_with_json 00:04:35.232 ************************************ 00:04:35.232 10:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:35.232 10:49:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:35.232 10:49:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57335 00:04:35.232 10:49:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.232 10:49:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.232 10:49:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57335 00:04:35.232 10:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57335 ']' 00:04:35.232 10:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.233 10:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:35.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.233 10:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.233 10:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:35.233 10:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.233 [2024-11-15 10:49:42.095651] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:04:35.233 [2024-11-15 10:49:42.095801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57335 ] 00:04:35.493 [2024-11-15 10:49:42.271461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.493 [2024-11-15 10:49:42.389480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.430 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:36.430 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:36.430 10:49:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:36.430 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.430 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.430 [2024-11-15 10:49:43.296635] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:36.430 request: 00:04:36.430 { 00:04:36.430 "trtype": "tcp", 00:04:36.430 "method": "nvmf_get_transports", 00:04:36.430 "req_id": 1 00:04:36.430 } 00:04:36.430 Got JSON-RPC error response 00:04:36.430 response: 00:04:36.430 { 00:04:36.430 "code": -19, 00:04:36.430 "message": "No such device" 00:04:36.430 } 00:04:36.430 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:36.430 10:49:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:36.430 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.430 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.430 [2024-11-15 10:49:43.308744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:36.430 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.430 10:49:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:36.430 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.430 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.690 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.690 10:49:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:36.690 { 00:04:36.690 "subsystems": [ 00:04:36.690 { 00:04:36.690 "subsystem": "fsdev", 00:04:36.690 "config": [ 00:04:36.690 { 00:04:36.690 "method": "fsdev_set_opts", 00:04:36.690 "params": { 00:04:36.690 "fsdev_io_pool_size": 65535, 00:04:36.690 "fsdev_io_cache_size": 256 00:04:36.690 } 00:04:36.690 } 00:04:36.690 ] 00:04:36.690 }, 00:04:36.690 { 00:04:36.690 "subsystem": "keyring", 00:04:36.690 "config": [] 00:04:36.690 }, 00:04:36.690 { 00:04:36.690 "subsystem": "iobuf", 00:04:36.690 "config": [ 00:04:36.690 { 00:04:36.690 "method": "iobuf_set_options", 00:04:36.690 "params": { 00:04:36.690 "small_pool_count": 8192, 00:04:36.690 "large_pool_count": 1024, 00:04:36.690 "small_bufsize": 8192, 00:04:36.690 "large_bufsize": 135168, 00:04:36.690 "enable_numa": false 00:04:36.690 } 00:04:36.690 } 00:04:36.690 ] 00:04:36.690 }, 00:04:36.690 { 00:04:36.690 "subsystem": "sock", 00:04:36.690 "config": [ 00:04:36.690 { 00:04:36.690 "method": "sock_set_default_impl", 00:04:36.690 "params": { 00:04:36.690 "impl_name": "posix" 00:04:36.690 } 00:04:36.690 }, 00:04:36.690 { 00:04:36.690 "method": "sock_impl_set_options", 00:04:36.690 "params": { 00:04:36.690 "impl_name": "ssl", 00:04:36.690 "recv_buf_size": 4096, 00:04:36.690 "send_buf_size": 4096, 00:04:36.690 "enable_recv_pipe": true, 00:04:36.690 "enable_quickack": false, 00:04:36.690 
"enable_placement_id": 0, 00:04:36.690 "enable_zerocopy_send_server": true, 00:04:36.690 "enable_zerocopy_send_client": false, 00:04:36.690 "zerocopy_threshold": 0, 00:04:36.690 "tls_version": 0, 00:04:36.690 "enable_ktls": false 00:04:36.690 } 00:04:36.690 }, 00:04:36.690 { 00:04:36.690 "method": "sock_impl_set_options", 00:04:36.690 "params": { 00:04:36.690 "impl_name": "posix", 00:04:36.690 "recv_buf_size": 2097152, 00:04:36.690 "send_buf_size": 2097152, 00:04:36.690 "enable_recv_pipe": true, 00:04:36.690 "enable_quickack": false, 00:04:36.690 "enable_placement_id": 0, 00:04:36.690 "enable_zerocopy_send_server": true, 00:04:36.690 "enable_zerocopy_send_client": false, 00:04:36.690 "zerocopy_threshold": 0, 00:04:36.690 "tls_version": 0, 00:04:36.690 "enable_ktls": false 00:04:36.690 } 00:04:36.690 } 00:04:36.690 ] 00:04:36.690 }, 00:04:36.690 { 00:04:36.690 "subsystem": "vmd", 00:04:36.690 "config": [] 00:04:36.690 }, 00:04:36.690 { 00:04:36.690 "subsystem": "accel", 00:04:36.690 "config": [ 00:04:36.690 { 00:04:36.690 "method": "accel_set_options", 00:04:36.690 "params": { 00:04:36.690 "small_cache_size": 128, 00:04:36.690 "large_cache_size": 16, 00:04:36.690 "task_count": 2048, 00:04:36.690 "sequence_count": 2048, 00:04:36.690 "buf_count": 2048 00:04:36.690 } 00:04:36.690 } 00:04:36.690 ] 00:04:36.690 }, 00:04:36.690 { 00:04:36.690 "subsystem": "bdev", 00:04:36.690 "config": [ 00:04:36.690 { 00:04:36.690 "method": "bdev_set_options", 00:04:36.690 "params": { 00:04:36.690 "bdev_io_pool_size": 65535, 00:04:36.690 "bdev_io_cache_size": 256, 00:04:36.690 "bdev_auto_examine": true, 00:04:36.690 "iobuf_small_cache_size": 128, 00:04:36.690 "iobuf_large_cache_size": 16 00:04:36.690 } 00:04:36.690 }, 00:04:36.690 { 00:04:36.690 "method": "bdev_raid_set_options", 00:04:36.690 "params": { 00:04:36.690 "process_window_size_kb": 1024, 00:04:36.690 "process_max_bandwidth_mb_sec": 0 00:04:36.690 } 00:04:36.690 }, 00:04:36.690 { 00:04:36.690 "method": "bdev_iscsi_set_options", 
00:04:36.690 "params": { 00:04:36.690 "timeout_sec": 30 00:04:36.690 } 00:04:36.690 }, 00:04:36.690 { 00:04:36.690 "method": "bdev_nvme_set_options", 00:04:36.690 "params": { 00:04:36.690 "action_on_timeout": "none", 00:04:36.690 "timeout_us": 0, 00:04:36.690 "timeout_admin_us": 0, 00:04:36.690 "keep_alive_timeout_ms": 10000, 00:04:36.690 "arbitration_burst": 0, 00:04:36.690 "low_priority_weight": 0, 00:04:36.690 "medium_priority_weight": 0, 00:04:36.690 "high_priority_weight": 0, 00:04:36.690 "nvme_adminq_poll_period_us": 10000, 00:04:36.690 "nvme_ioq_poll_period_us": 0, 00:04:36.690 "io_queue_requests": 0, 00:04:36.690 "delay_cmd_submit": true, 00:04:36.690 "transport_retry_count": 4, 00:04:36.690 "bdev_retry_count": 3, 00:04:36.690 "transport_ack_timeout": 0, 00:04:36.690 "ctrlr_loss_timeout_sec": 0, 00:04:36.690 "reconnect_delay_sec": 0, 00:04:36.690 "fast_io_fail_timeout_sec": 0, 00:04:36.690 "disable_auto_failback": false, 00:04:36.690 "generate_uuids": false, 00:04:36.690 "transport_tos": 0, 00:04:36.690 "nvme_error_stat": false, 00:04:36.690 "rdma_srq_size": 0, 00:04:36.690 "io_path_stat": false, 00:04:36.690 "allow_accel_sequence": false, 00:04:36.690 "rdma_max_cq_size": 0, 00:04:36.690 "rdma_cm_event_timeout_ms": 0, 00:04:36.690 "dhchap_digests": [ 00:04:36.690 "sha256", 00:04:36.690 "sha384", 00:04:36.690 "sha512" 00:04:36.690 ], 00:04:36.690 "dhchap_dhgroups": [ 00:04:36.690 "null", 00:04:36.690 "ffdhe2048", 00:04:36.690 "ffdhe3072", 00:04:36.690 "ffdhe4096", 00:04:36.690 "ffdhe6144", 00:04:36.691 "ffdhe8192" 00:04:36.691 ] 00:04:36.691 } 00:04:36.691 }, 00:04:36.691 { 00:04:36.691 "method": "bdev_nvme_set_hotplug", 00:04:36.691 "params": { 00:04:36.691 "period_us": 100000, 00:04:36.691 "enable": false 00:04:36.691 } 00:04:36.691 }, 00:04:36.691 { 00:04:36.691 "method": "bdev_wait_for_examine" 00:04:36.691 } 00:04:36.691 ] 00:04:36.691 }, 00:04:36.691 { 00:04:36.691 "subsystem": "scsi", 00:04:36.691 "config": null 00:04:36.691 }, 00:04:36.691 { 
00:04:36.691 "subsystem": "scheduler", 00:04:36.691 "config": [ 00:04:36.691 { 00:04:36.691 "method": "framework_set_scheduler", 00:04:36.691 "params": { 00:04:36.691 "name": "static" 00:04:36.691 } 00:04:36.691 } 00:04:36.691 ] 00:04:36.691 }, 00:04:36.691 { 00:04:36.691 "subsystem": "vhost_scsi", 00:04:36.691 "config": [] 00:04:36.691 }, 00:04:36.691 { 00:04:36.691 "subsystem": "vhost_blk", 00:04:36.691 "config": [] 00:04:36.691 }, 00:04:36.691 { 00:04:36.691 "subsystem": "ublk", 00:04:36.691 "config": [] 00:04:36.691 }, 00:04:36.691 { 00:04:36.691 "subsystem": "nbd", 00:04:36.691 "config": [] 00:04:36.691 }, 00:04:36.691 { 00:04:36.691 "subsystem": "nvmf", 00:04:36.691 "config": [ 00:04:36.691 { 00:04:36.691 "method": "nvmf_set_config", 00:04:36.691 "params": { 00:04:36.691 "discovery_filter": "match_any", 00:04:36.691 "admin_cmd_passthru": { 00:04:36.691 "identify_ctrlr": false 00:04:36.691 }, 00:04:36.691 "dhchap_digests": [ 00:04:36.691 "sha256", 00:04:36.691 "sha384", 00:04:36.691 "sha512" 00:04:36.691 ], 00:04:36.691 "dhchap_dhgroups": [ 00:04:36.691 "null", 00:04:36.691 "ffdhe2048", 00:04:36.691 "ffdhe3072", 00:04:36.691 "ffdhe4096", 00:04:36.691 "ffdhe6144", 00:04:36.691 "ffdhe8192" 00:04:36.691 ] 00:04:36.691 } 00:04:36.691 }, 00:04:36.691 { 00:04:36.691 "method": "nvmf_set_max_subsystems", 00:04:36.691 "params": { 00:04:36.691 "max_subsystems": 1024 00:04:36.691 } 00:04:36.691 }, 00:04:36.691 { 00:04:36.691 "method": "nvmf_set_crdt", 00:04:36.691 "params": { 00:04:36.691 "crdt1": 0, 00:04:36.691 "crdt2": 0, 00:04:36.691 "crdt3": 0 00:04:36.691 } 00:04:36.691 }, 00:04:36.691 { 00:04:36.691 "method": "nvmf_create_transport", 00:04:36.691 "params": { 00:04:36.691 "trtype": "TCP", 00:04:36.691 "max_queue_depth": 128, 00:04:36.691 "max_io_qpairs_per_ctrlr": 127, 00:04:36.691 "in_capsule_data_size": 4096, 00:04:36.691 "max_io_size": 131072, 00:04:36.691 "io_unit_size": 131072, 00:04:36.691 "max_aq_depth": 128, 00:04:36.691 "num_shared_buffers": 511, 
00:04:36.691 "buf_cache_size": 4294967295, 00:04:36.691 "dif_insert_or_strip": false, 00:04:36.691 "zcopy": false, 00:04:36.691 "c2h_success": true, 00:04:36.691 "sock_priority": 0, 00:04:36.691 "abort_timeout_sec": 1, 00:04:36.691 "ack_timeout": 0, 00:04:36.691 "data_wr_pool_size": 0 00:04:36.691 } 00:04:36.691 } 00:04:36.691 ] 00:04:36.691 }, 00:04:36.691 { 00:04:36.691 "subsystem": "iscsi", 00:04:36.691 "config": [ 00:04:36.691 { 00:04:36.691 "method": "iscsi_set_options", 00:04:36.691 "params": { 00:04:36.691 "node_base": "iqn.2016-06.io.spdk", 00:04:36.691 "max_sessions": 128, 00:04:36.691 "max_connections_per_session": 2, 00:04:36.691 "max_queue_depth": 64, 00:04:36.691 "default_time2wait": 2, 00:04:36.691 "default_time2retain": 20, 00:04:36.691 "first_burst_length": 8192, 00:04:36.691 "immediate_data": true, 00:04:36.691 "allow_duplicated_isid": false, 00:04:36.691 "error_recovery_level": 0, 00:04:36.691 "nop_timeout": 60, 00:04:36.691 "nop_in_interval": 30, 00:04:36.691 "disable_chap": false, 00:04:36.691 "require_chap": false, 00:04:36.691 "mutual_chap": false, 00:04:36.691 "chap_group": 0, 00:04:36.691 "max_large_datain_per_connection": 64, 00:04:36.691 "max_r2t_per_connection": 4, 00:04:36.691 "pdu_pool_size": 36864, 00:04:36.691 "immediate_data_pool_size": 16384, 00:04:36.691 "data_out_pool_size": 2048 00:04:36.691 } 00:04:36.691 } 00:04:36.691 ] 00:04:36.691 } 00:04:36.691 ] 00:04:36.691 } 00:04:36.691 10:49:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:36.691 10:49:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57335 00:04:36.691 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57335 ']' 00:04:36.691 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57335 00:04:36.691 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:36.691 10:49:43 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:36.691 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57335 00:04:36.691 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:36.691 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:36.691 killing process with pid 57335 00:04:36.691 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57335' 00:04:36.691 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57335 00:04:36.691 10:49:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57335 00:04:39.233 10:49:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57391 00:04:39.233 10:49:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:39.233 10:49:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:44.495 10:49:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57391 00:04:44.495 10:49:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57391 ']' 00:04:44.495 10:49:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57391 00:04:44.495 10:49:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:44.495 10:49:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:44.495 10:49:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57391 00:04:44.495 10:49:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:44.495 10:49:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = 
sudo ']' 00:04:44.495 killing process with pid 57391 00:04:44.495 10:49:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57391' 00:04:44.495 10:49:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57391 00:04:44.495 10:49:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57391 00:04:47.030 10:49:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:47.030 10:49:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:47.030 00:04:47.030 real 0m11.593s 00:04:47.030 user 0m11.009s 00:04:47.030 sys 0m0.891s 00:04:47.030 10:49:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.030 10:49:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.030 ************************************ 00:04:47.030 END TEST skip_rpc_with_json 00:04:47.030 ************************************ 00:04:47.030 10:49:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:47.030 10:49:53 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.030 10:49:53 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.030 10:49:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.030 ************************************ 00:04:47.030 START TEST skip_rpc_with_delay 00:04:47.030 ************************************ 00:04:47.030 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:47.030 10:49:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:47.030 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:47.030 10:49:53 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:47.031 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.031 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:47.031 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.031 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:47.031 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.031 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:47.031 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.031 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:47.031 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:47.031 [2024-11-15 10:49:53.708547] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:47.031 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:47.031 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:47.031 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:47.031 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:47.031 00:04:47.031 real 0m0.166s 00:04:47.031 user 0m0.095s 00:04:47.031 sys 0m0.070s 00:04:47.031 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.031 10:49:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:47.031 ************************************ 00:04:47.031 END TEST skip_rpc_with_delay 00:04:47.031 ************************************ 00:04:47.031 10:49:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:47.031 10:49:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:47.031 10:49:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:47.031 10:49:53 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.031 10:49:53 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.031 10:49:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.031 ************************************ 00:04:47.031 START TEST exit_on_failed_rpc_init 00:04:47.031 ************************************ 00:04:47.031 10:49:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:47.031 10:49:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57530 00:04:47.031 10:49:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.031 10:49:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57530 00:04:47.031 10:49:53 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57530 ']' 00:04:47.031 10:49:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.031 10:49:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:47.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.031 10:49:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.031 10:49:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:47.031 10:49:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:47.031 [2024-11-15 10:49:53.939703] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:04:47.031 [2024-11-15 10:49:53.939834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57530 ] 00:04:47.289 [2024-11-15 10:49:54.115395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.547 [2024-11-15 10:49:54.236239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.483 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:48.483 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:48.483 10:49:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.483 10:49:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.483 10:49:55 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:48.483 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.483 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.483 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.483 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.483 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.483 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.483 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.483 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.483 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:48.483 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.483 [2024-11-15 10:49:55.270641] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:04:48.483 [2024-11-15 10:49:55.270764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57548 ] 00:04:48.741 [2024-11-15 10:49:55.449534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.741 [2024-11-15 10:49:55.583553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.741 [2024-11-15 10:49:55.583666] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:48.741 [2024-11-15 10:49:55.583682] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:48.741 [2024-11-15 10:49:55.583697] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:48.999 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:48.999 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:48.999 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:48.999 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:48.999 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:49.000 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:49.000 10:49:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:49.000 10:49:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57530 00:04:49.000 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57530 ']' 00:04:49.000 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57530 00:04:49.000 10:49:55 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:49.000 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:49.000 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57530 00:04:49.000 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:49.000 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:49.000 killing process with pid 57530 00:04:49.000 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57530' 00:04:49.000 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57530 00:04:49.000 10:49:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57530 00:04:51.529 00:04:51.529 real 0m4.585s 00:04:51.529 user 0m5.008s 00:04:51.529 sys 0m0.581s 00:04:51.529 10:49:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:51.529 10:49:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.529 ************************************ 00:04:51.529 END TEST exit_on_failed_rpc_init 00:04:51.529 ************************************ 00:04:51.788 10:49:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:51.788 00:04:51.788 real 0m24.399s 00:04:51.788 user 0m23.421s 00:04:51.788 sys 0m2.226s 00:04:51.788 10:49:58 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:51.788 10:49:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.788 ************************************ 00:04:51.788 END TEST skip_rpc 00:04:51.788 ************************************ 00:04:51.788 10:49:58 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:51.788 10:49:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:51.788 10:49:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.788 10:49:58 -- common/autotest_common.sh@10 -- # set +x 00:04:51.788 ************************************ 00:04:51.788 START TEST rpc_client 00:04:51.788 ************************************ 00:04:51.788 10:49:58 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:51.788 * Looking for test storage... 00:04:51.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:51.788 10:49:58 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:51.788 10:49:58 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:51.788 10:49:58 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:52.051 10:49:58 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.051 10:49:58 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:52.051 10:49:58 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.051 10:49:58 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:52.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.051 --rc genhtml_branch_coverage=1 00:04:52.051 --rc genhtml_function_coverage=1 00:04:52.051 --rc genhtml_legend=1 00:04:52.051 --rc geninfo_all_blocks=1 00:04:52.051 --rc geninfo_unexecuted_blocks=1 00:04:52.051 00:04:52.051 ' 00:04:52.051 10:49:58 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:52.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.051 --rc genhtml_branch_coverage=1 00:04:52.051 --rc genhtml_function_coverage=1 00:04:52.051 --rc 
genhtml_legend=1 00:04:52.051 --rc geninfo_all_blocks=1 00:04:52.051 --rc geninfo_unexecuted_blocks=1 00:04:52.051 00:04:52.051 ' 00:04:52.051 10:49:58 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:52.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.051 --rc genhtml_branch_coverage=1 00:04:52.051 --rc genhtml_function_coverage=1 00:04:52.051 --rc genhtml_legend=1 00:04:52.051 --rc geninfo_all_blocks=1 00:04:52.051 --rc geninfo_unexecuted_blocks=1 00:04:52.051 00:04:52.051 ' 00:04:52.051 10:49:58 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:52.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.051 --rc genhtml_branch_coverage=1 00:04:52.051 --rc genhtml_function_coverage=1 00:04:52.052 --rc genhtml_legend=1 00:04:52.052 --rc geninfo_all_blocks=1 00:04:52.052 --rc geninfo_unexecuted_blocks=1 00:04:52.052 00:04:52.052 ' 00:04:52.052 10:49:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:52.052 OK 00:04:52.052 10:49:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:52.052 00:04:52.052 real 0m0.302s 00:04:52.052 user 0m0.181s 00:04:52.052 sys 0m0.138s 00:04:52.052 10:49:58 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:52.052 10:49:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:52.052 ************************************ 00:04:52.052 END TEST rpc_client 00:04:52.052 ************************************ 00:04:52.052 10:49:58 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:52.052 10:49:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:52.052 10:49:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:52.052 10:49:58 -- common/autotest_common.sh@10 -- # set +x 00:04:52.052 ************************************ 00:04:52.052 START TEST json_config 
00:04:52.052 ************************************ 00:04:52.052 10:49:58 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:52.323 10:49:58 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:52.323 10:49:58 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:52.323 10:49:58 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:52.323 10:49:59 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:52.323 10:49:59 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.323 10:49:59 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.323 10:49:59 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.323 10:49:59 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.323 10:49:59 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.323 10:49:59 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.323 10:49:59 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.323 10:49:59 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.323 10:49:59 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.323 10:49:59 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.323 10:49:59 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.323 10:49:59 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:52.323 10:49:59 json_config -- scripts/common.sh@345 -- # : 1 00:04:52.323 10:49:59 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.323 10:49:59 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.323 10:49:59 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:52.323 10:49:59 json_config -- scripts/common.sh@353 -- # local d=1 00:04:52.323 10:49:59 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.323 10:49:59 json_config -- scripts/common.sh@355 -- # echo 1 00:04:52.323 10:49:59 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.323 10:49:59 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:52.323 10:49:59 json_config -- scripts/common.sh@353 -- # local d=2 00:04:52.323 10:49:59 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.323 10:49:59 json_config -- scripts/common.sh@355 -- # echo 2 00:04:52.323 10:49:59 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.323 10:49:59 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.323 10:49:59 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.323 10:49:59 json_config -- scripts/common.sh@368 -- # return 0 00:04:52.323 10:49:59 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.323 10:49:59 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:52.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.323 --rc genhtml_branch_coverage=1 00:04:52.323 --rc genhtml_function_coverage=1 00:04:52.323 --rc genhtml_legend=1 00:04:52.323 --rc geninfo_all_blocks=1 00:04:52.323 --rc geninfo_unexecuted_blocks=1 00:04:52.323 00:04:52.323 ' 00:04:52.323 10:49:59 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:52.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.323 --rc genhtml_branch_coverage=1 00:04:52.323 --rc genhtml_function_coverage=1 00:04:52.323 --rc genhtml_legend=1 00:04:52.323 --rc geninfo_all_blocks=1 00:04:52.323 --rc geninfo_unexecuted_blocks=1 00:04:52.323 00:04:52.323 ' 00:04:52.323 10:49:59 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:52.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.323 --rc genhtml_branch_coverage=1 00:04:52.323 --rc genhtml_function_coverage=1 00:04:52.323 --rc genhtml_legend=1 00:04:52.323 --rc geninfo_all_blocks=1 00:04:52.323 --rc geninfo_unexecuted_blocks=1 00:04:52.323 00:04:52.323 ' 00:04:52.323 10:49:59 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:52.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.323 --rc genhtml_branch_coverage=1 00:04:52.324 --rc genhtml_function_coverage=1 00:04:52.324 --rc genhtml_legend=1 00:04:52.324 --rc geninfo_all_blocks=1 00:04:52.324 --rc geninfo_unexecuted_blocks=1 00:04:52.324 00:04:52.324 ' 00:04:52.324 10:49:59 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:25153ce7-b438-470b-af0e-c451b6522a73 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=25153ce7-b438-470b-af0e-c451b6522a73 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:52.324 10:49:59 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:52.324 10:49:59 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.324 10:49:59 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.324 10:49:59 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.324 10:49:59 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.324 10:49:59 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.324 10:49:59 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.324 10:49:59 json_config -- paths/export.sh@5 -- # export PATH 00:04:52.324 10:49:59 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@51 -- # : 0 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:52.324 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:52.324 10:49:59 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:52.324 10:49:59 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:52.324 10:49:59 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:52.324 10:49:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:52.324 10:49:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:52.324 10:49:59 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:52.324 WARNING: No tests are enabled so not running JSON configuration tests 00:04:52.324 10:49:59 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:52.324 10:49:59 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:52.324 00:04:52.324 real 0m0.215s 00:04:52.324 user 0m0.144s 00:04:52.324 sys 0m0.081s 00:04:52.324 10:49:59 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:52.324 10:49:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.324 ************************************ 00:04:52.324 END TEST json_config 00:04:52.324 ************************************ 00:04:52.324 10:49:59 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:52.324 10:49:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:52.324 10:49:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:52.324 10:49:59 -- common/autotest_common.sh@10 -- # set +x 00:04:52.324 ************************************ 00:04:52.324 START TEST json_config_extra_key 00:04:52.324 ************************************ 00:04:52.324 10:49:59 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:52.584 10:49:59 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:52.584 10:49:59 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:04:52.584 10:49:59 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:52.584 10:49:59 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:52.584 10:49:59 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.584 10:49:59 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:52.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.584 --rc genhtml_branch_coverage=1 00:04:52.584 --rc genhtml_function_coverage=1 00:04:52.584 --rc genhtml_legend=1 00:04:52.584 --rc geninfo_all_blocks=1 00:04:52.584 --rc geninfo_unexecuted_blocks=1 00:04:52.584 00:04:52.584 ' 00:04:52.584 10:49:59 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:52.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.584 --rc genhtml_branch_coverage=1 00:04:52.584 --rc genhtml_function_coverage=1 00:04:52.584 --rc 
genhtml_legend=1 00:04:52.584 --rc geninfo_all_blocks=1 00:04:52.584 --rc geninfo_unexecuted_blocks=1 00:04:52.584 00:04:52.584 ' 00:04:52.584 10:49:59 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:52.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.584 --rc genhtml_branch_coverage=1 00:04:52.584 --rc genhtml_function_coverage=1 00:04:52.584 --rc genhtml_legend=1 00:04:52.584 --rc geninfo_all_blocks=1 00:04:52.584 --rc geninfo_unexecuted_blocks=1 00:04:52.584 00:04:52.584 ' 00:04:52.584 10:49:59 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:52.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.584 --rc genhtml_branch_coverage=1 00:04:52.584 --rc genhtml_function_coverage=1 00:04:52.584 --rc genhtml_legend=1 00:04:52.584 --rc geninfo_all_blocks=1 00:04:52.584 --rc geninfo_unexecuted_blocks=1 00:04:52.584 00:04:52.584 ' 00:04:52.584 10:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:25153ce7-b438-470b-af0e-c451b6522a73 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=25153ce7-b438-470b-af0e-c451b6522a73 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.584 10:49:59 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.584 10:49:59 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.584 10:49:59 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.584 10:49:59 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.584 10:49:59 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:52.584 10:49:59 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:52.584 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:52.584 10:49:59 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:52.584 10:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:52.584 10:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:52.584 10:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:52.584 10:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:52.584 10:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:52.584 10:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:52.585 10:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:52.585 10:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:52.585 10:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:52.585 10:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:52.585 INFO: launching applications... 00:04:52.585 10:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:52.585 10:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:52.585 10:49:59 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:52.585 10:49:59 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:52.585 10:49:59 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:52.585 10:49:59 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:52.585 10:49:59 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:52.585 10:49:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.585 10:49:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.585 10:49:59 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57758 00:04:52.585 Waiting for target to run... 00:04:52.585 10:49:59 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:52.585 10:49:59 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57758 /var/tmp/spdk_tgt.sock 00:04:52.585 10:49:59 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57758 ']' 00:04:52.585 10:49:59 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:52.585 10:49:59 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:52.585 10:49:59 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:52.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
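The trace above launches `spdk_tgt` and then blocks on `waitforlisten 57758 /var/tmp/spdk_tgt.sock` until the target is accepting RPCs. As a rough illustration of that wait-for-socket pattern, here is a minimal sketch; the helper name, retry budget, and the bare `-S` file test are assumptions for illustration only, not SPDK's actual `waitforlisten` (which lives in `test/common/autotest_common.sh` and also probes the RPC endpoint):

```shell
# Sketch of a "wait until the app's UNIX-domain socket exists" helper,
# in the spirit of the waitforlisten call traced above. Simplified:
# the real helper also verifies the process is alive and the RPC
# endpoint answers, not just that the socket file appeared.
wait_for_socket() {
    local sock=$1 retries=${2:-50}
    for ((i = 0; i < retries; i++)); do
        # -S is true when the path exists and is a socket
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1  # gave up; caller should treat the app start as failed
}
```

A caller would pair this with the launch, e.g. start the target in the background and only proceed to the test body once `wait_for_socket` returns 0.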
00:04:52.585 10:49:59 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:52.585 10:49:59 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:52.585 10:49:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:52.843 [2024-11-15 10:49:59.529720] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:04:52.843 [2024-11-15 10:49:59.529851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57758 ] 00:04:53.102 [2024-11-15 10:49:59.939549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.360 [2024-11-15 10:50:00.049467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.927 10:50:00 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:53.927 10:50:00 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:53.927 00:04:53.927 10:50:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:53.927 INFO: shutting down applications... 00:04:53.927 10:50:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
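The shutdown sequence traced below (`kill -SIGINT 57758`, then repeated `kill -0 57758` probes separated by `sleep 0.5`, up to 30 iterations) is a graceful-stop poll. A minimal sketch of that loop, with the function name and escalation comment as illustrative assumptions rather than SPDK's exact code:

```shell
# Sketch of the graceful-shutdown poll from json_config/common.sh as
# traced below: send SIGINT once, then re-probe the pid until it exits
# or the retry budget is exhausted.
shutdown_app() {
    local pid=$1 retries=${2:-30}
    kill -SIGINT "$pid" 2>/dev/null
    for ((i = 0; i < retries; i++)); do
        # kill -0 delivers no signal; it only tests that the pid exists
        kill -0 "$pid" 2>/dev/null || return 0
        sleep 0.5
    done
    return 1  # still alive after the budget; a caller might escalate to SIGKILL
}
```

The repeated `sleep 0.5` / `kill -0` pairs in the log entries that follow are exactly this loop iterating while the target drains and exits.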
00:04:53.927 10:50:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:53.927 10:50:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:53.927 10:50:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:53.927 10:50:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57758 ]] 00:04:53.927 10:50:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57758 00:04:53.927 10:50:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:53.927 10:50:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.927 10:50:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57758 00:04:53.927 10:50:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.495 10:50:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.495 10:50:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.495 10:50:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57758 00:04:54.495 10:50:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.065 10:50:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.065 10:50:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.065 10:50:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57758 00:04:55.065 10:50:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.632 10:50:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.632 10:50:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.632 10:50:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57758 00:04:55.632 10:50:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.200 10:50:02 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:56.200 10:50:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.200 10:50:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57758 00:04:56.200 10:50:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.459 10:50:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.459 10:50:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.459 10:50:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57758 00:04:56.459 10:50:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:57.025 10:50:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:57.025 10:50:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.025 10:50:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57758 00:04:57.025 10:50:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:57.025 10:50:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:57.025 10:50:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:57.025 SPDK target shutdown done 00:04:57.025 10:50:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:57.025 Success 00:04:57.025 10:50:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:57.025 00:04:57.025 real 0m4.669s 00:04:57.025 user 0m4.383s 00:04:57.025 sys 0m0.584s 00:04:57.025 10:50:03 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.025 10:50:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:57.025 ************************************ 00:04:57.025 END TEST json_config_extra_key 00:04:57.025 ************************************ 00:04:57.025 10:50:03 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.025 10:50:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:57.025 10:50:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.025 10:50:03 -- common/autotest_common.sh@10 -- # set +x 00:04:57.025 ************************************ 00:04:57.025 START TEST alias_rpc 00:04:57.025 ************************************ 00:04:57.025 10:50:03 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.285 * Looking for test storage... 00:04:57.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:57.285 10:50:04 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:57.285 10:50:04 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:57.285 10:50:04 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:57.285 10:50:04 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:57.285 10:50:04 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.285 10:50:04 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:57.285 10:50:04 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.285 10:50:04 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:57.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.285 --rc genhtml_branch_coverage=1 00:04:57.285 --rc genhtml_function_coverage=1 00:04:57.285 --rc genhtml_legend=1 00:04:57.285 --rc geninfo_all_blocks=1 00:04:57.285 --rc geninfo_unexecuted_blocks=1 00:04:57.285 00:04:57.285 ' 00:04:57.285 10:50:04 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:57.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.285 --rc genhtml_branch_coverage=1 00:04:57.285 --rc genhtml_function_coverage=1 00:04:57.285 --rc 
genhtml_legend=1 00:04:57.285 --rc geninfo_all_blocks=1 00:04:57.285 --rc geninfo_unexecuted_blocks=1 00:04:57.285 00:04:57.285 ' 00:04:57.285 10:50:04 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:57.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.285 --rc genhtml_branch_coverage=1 00:04:57.285 --rc genhtml_function_coverage=1 00:04:57.285 --rc genhtml_legend=1 00:04:57.285 --rc geninfo_all_blocks=1 00:04:57.285 --rc geninfo_unexecuted_blocks=1 00:04:57.285 00:04:57.285 ' 00:04:57.285 10:50:04 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:57.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.285 --rc genhtml_branch_coverage=1 00:04:57.285 --rc genhtml_function_coverage=1 00:04:57.285 --rc genhtml_legend=1 00:04:57.285 --rc geninfo_all_blocks=1 00:04:57.285 --rc geninfo_unexecuted_blocks=1 00:04:57.285 00:04:57.285 ' 00:04:57.285 10:50:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:57.285 10:50:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57875 00:04:57.285 10:50:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.285 10:50:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57875 00:04:57.285 10:50:04 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57875 ']' 00:04:57.285 10:50:04 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.285 10:50:04 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.285 10:50:04 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:57.285 10:50:04 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.285 10:50:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.544 [2024-11-15 10:50:04.269343] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:04:57.544 [2024-11-15 10:50:04.269467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57875 ] 00:04:57.544 [2024-11-15 10:50:04.444010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.803 [2024-11-15 10:50:04.561591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.738 10:50:05 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:58.738 10:50:05 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:58.738 10:50:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:58.998 10:50:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57875 00:04:58.998 10:50:05 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57875 ']' 00:04:58.998 10:50:05 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57875 00:04:58.998 10:50:05 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:58.998 10:50:05 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:58.998 10:50:05 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57875 00:04:58.998 killing process with pid 57875 00:04:58.998 10:50:05 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:58.998 10:50:05 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:58.998 10:50:05 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57875' 00:04:58.998 10:50:05 alias_rpc -- 
common/autotest_common.sh@971 -- # kill 57875 00:04:58.998 10:50:05 alias_rpc -- common/autotest_common.sh@976 -- # wait 57875 00:05:01.535 00:05:01.535 real 0m4.369s 00:05:01.535 user 0m4.414s 00:05:01.535 sys 0m0.589s 00:05:01.535 10:50:08 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.535 10:50:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.535 ************************************ 00:05:01.535 END TEST alias_rpc 00:05:01.535 ************************************ 00:05:01.535 10:50:08 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:01.535 10:50:08 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:01.535 10:50:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:01.535 10:50:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.535 10:50:08 -- common/autotest_common.sh@10 -- # set +x 00:05:01.535 ************************************ 00:05:01.535 START TEST spdkcli_tcp 00:05:01.535 ************************************ 00:05:01.535 10:50:08 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:01.795 * Looking for test storage... 
00:05:01.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:01.795 10:50:08 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:01.795 10:50:08 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:01.795 10:50:08 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:01.795 10:50:08 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.795 10:50:08 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:01.795 10:50:08 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.795 10:50:08 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:01.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.795 --rc genhtml_branch_coverage=1 00:05:01.795 --rc genhtml_function_coverage=1 00:05:01.795 --rc genhtml_legend=1 00:05:01.795 --rc geninfo_all_blocks=1 00:05:01.795 --rc geninfo_unexecuted_blocks=1 00:05:01.795 00:05:01.795 ' 00:05:01.795 10:50:08 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:01.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.795 --rc genhtml_branch_coverage=1 00:05:01.795 --rc genhtml_function_coverage=1 00:05:01.795 --rc genhtml_legend=1 00:05:01.795 --rc geninfo_all_blocks=1 00:05:01.795 --rc geninfo_unexecuted_blocks=1 00:05:01.795 00:05:01.795 ' 00:05:01.795 10:50:08 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:01.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.795 --rc genhtml_branch_coverage=1 00:05:01.795 --rc genhtml_function_coverage=1 00:05:01.795 --rc genhtml_legend=1 00:05:01.795 --rc geninfo_all_blocks=1 00:05:01.795 --rc geninfo_unexecuted_blocks=1 00:05:01.795 00:05:01.795 ' 00:05:01.795 10:50:08 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:01.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.795 --rc genhtml_branch_coverage=1 00:05:01.795 --rc genhtml_function_coverage=1 00:05:01.795 --rc genhtml_legend=1 00:05:01.795 --rc geninfo_all_blocks=1 00:05:01.795 --rc geninfo_unexecuted_blocks=1 00:05:01.795 00:05:01.795 ' 00:05:01.795 10:50:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:01.795 10:50:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:01.795 10:50:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:01.795 10:50:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:01.795 10:50:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:01.795 10:50:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:01.795 10:50:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:01.795 10:50:08 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:01.795 10:50:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.795 10:50:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57982 00:05:01.795 10:50:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:01.795 10:50:08 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57982 00:05:01.795 10:50:08 spdkcli_tcp -- 
common/autotest_common.sh@833 -- # '[' -z 57982 ']' 00:05:01.795 10:50:08 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.795 10:50:08 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:01.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.795 10:50:08 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.795 10:50:08 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:01.795 10:50:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.795 [2024-11-15 10:50:08.696003] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:05:01.795 [2024-11-15 10:50:08.696118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57982 ] 00:05:02.054 [2024-11-15 10:50:08.872252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.313 [2024-11-15 10:50:09.006139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.313 [2024-11-15 10:50:09.006173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.249 10:50:09 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:03.249 10:50:09 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:03.249 10:50:09 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58005 00:05:03.249 10:50:09 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:03.249 10:50:09 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:03.249 [ 00:05:03.249 "bdev_malloc_delete", 
00:05:03.249 "bdev_malloc_create", 00:05:03.249 "bdev_null_resize", 00:05:03.249 "bdev_null_delete", 00:05:03.249 "bdev_null_create", 00:05:03.249 "bdev_nvme_cuse_unregister", 00:05:03.249 "bdev_nvme_cuse_register", 00:05:03.249 "bdev_opal_new_user", 00:05:03.249 "bdev_opal_set_lock_state", 00:05:03.249 "bdev_opal_delete", 00:05:03.249 "bdev_opal_get_info", 00:05:03.249 "bdev_opal_create", 00:05:03.249 "bdev_nvme_opal_revert", 00:05:03.250 "bdev_nvme_opal_init", 00:05:03.250 "bdev_nvme_send_cmd", 00:05:03.250 "bdev_nvme_set_keys", 00:05:03.250 "bdev_nvme_get_path_iostat", 00:05:03.250 "bdev_nvme_get_mdns_discovery_info", 00:05:03.250 "bdev_nvme_stop_mdns_discovery", 00:05:03.250 "bdev_nvme_start_mdns_discovery", 00:05:03.250 "bdev_nvme_set_multipath_policy", 00:05:03.250 "bdev_nvme_set_preferred_path", 00:05:03.250 "bdev_nvme_get_io_paths", 00:05:03.250 "bdev_nvme_remove_error_injection", 00:05:03.250 "bdev_nvme_add_error_injection", 00:05:03.250 "bdev_nvme_get_discovery_info", 00:05:03.250 "bdev_nvme_stop_discovery", 00:05:03.250 "bdev_nvme_start_discovery", 00:05:03.250 "bdev_nvme_get_controller_health_info", 00:05:03.250 "bdev_nvme_disable_controller", 00:05:03.250 "bdev_nvme_enable_controller", 00:05:03.250 "bdev_nvme_reset_controller", 00:05:03.250 "bdev_nvme_get_transport_statistics", 00:05:03.250 "bdev_nvme_apply_firmware", 00:05:03.250 "bdev_nvme_detach_controller", 00:05:03.250 "bdev_nvme_get_controllers", 00:05:03.250 "bdev_nvme_attach_controller", 00:05:03.250 "bdev_nvme_set_hotplug", 00:05:03.250 "bdev_nvme_set_options", 00:05:03.250 "bdev_passthru_delete", 00:05:03.250 "bdev_passthru_create", 00:05:03.250 "bdev_lvol_set_parent_bdev", 00:05:03.250 "bdev_lvol_set_parent", 00:05:03.250 "bdev_lvol_check_shallow_copy", 00:05:03.250 "bdev_lvol_start_shallow_copy", 00:05:03.250 "bdev_lvol_grow_lvstore", 00:05:03.250 "bdev_lvol_get_lvols", 00:05:03.250 "bdev_lvol_get_lvstores", 00:05:03.250 "bdev_lvol_delete", 00:05:03.250 "bdev_lvol_set_read_only", 
00:05:03.250 "bdev_lvol_resize", 00:05:03.250 "bdev_lvol_decouple_parent", 00:05:03.250 "bdev_lvol_inflate", 00:05:03.250 "bdev_lvol_rename", 00:05:03.250 "bdev_lvol_clone_bdev", 00:05:03.250 "bdev_lvol_clone", 00:05:03.250 "bdev_lvol_snapshot", 00:05:03.250 "bdev_lvol_create", 00:05:03.250 "bdev_lvol_delete_lvstore", 00:05:03.250 "bdev_lvol_rename_lvstore", 00:05:03.250 "bdev_lvol_create_lvstore", 00:05:03.250 "bdev_raid_set_options", 00:05:03.250 "bdev_raid_remove_base_bdev", 00:05:03.250 "bdev_raid_add_base_bdev", 00:05:03.250 "bdev_raid_delete", 00:05:03.250 "bdev_raid_create", 00:05:03.250 "bdev_raid_get_bdevs", 00:05:03.250 "bdev_error_inject_error", 00:05:03.250 "bdev_error_delete", 00:05:03.250 "bdev_error_create", 00:05:03.250 "bdev_split_delete", 00:05:03.250 "bdev_split_create", 00:05:03.250 "bdev_delay_delete", 00:05:03.250 "bdev_delay_create", 00:05:03.250 "bdev_delay_update_latency", 00:05:03.250 "bdev_zone_block_delete", 00:05:03.250 "bdev_zone_block_create", 00:05:03.250 "blobfs_create", 00:05:03.250 "blobfs_detect", 00:05:03.250 "blobfs_set_cache_size", 00:05:03.250 "bdev_aio_delete", 00:05:03.250 "bdev_aio_rescan", 00:05:03.250 "bdev_aio_create", 00:05:03.250 "bdev_ftl_set_property", 00:05:03.250 "bdev_ftl_get_properties", 00:05:03.250 "bdev_ftl_get_stats", 00:05:03.250 "bdev_ftl_unmap", 00:05:03.250 "bdev_ftl_unload", 00:05:03.250 "bdev_ftl_delete", 00:05:03.250 "bdev_ftl_load", 00:05:03.250 "bdev_ftl_create", 00:05:03.250 "bdev_virtio_attach_controller", 00:05:03.250 "bdev_virtio_scsi_get_devices", 00:05:03.250 "bdev_virtio_detach_controller", 00:05:03.250 "bdev_virtio_blk_set_hotplug", 00:05:03.250 "bdev_iscsi_delete", 00:05:03.250 "bdev_iscsi_create", 00:05:03.250 "bdev_iscsi_set_options", 00:05:03.250 "accel_error_inject_error", 00:05:03.250 "ioat_scan_accel_module", 00:05:03.250 "dsa_scan_accel_module", 00:05:03.250 "iaa_scan_accel_module", 00:05:03.250 "keyring_file_remove_key", 00:05:03.250 "keyring_file_add_key", 00:05:03.250 
"keyring_linux_set_options", 00:05:03.250 "fsdev_aio_delete", 00:05:03.250 "fsdev_aio_create", 00:05:03.250 "iscsi_get_histogram", 00:05:03.250 "iscsi_enable_histogram", 00:05:03.250 "iscsi_set_options", 00:05:03.250 "iscsi_get_auth_groups", 00:05:03.250 "iscsi_auth_group_remove_secret", 00:05:03.250 "iscsi_auth_group_add_secret", 00:05:03.250 "iscsi_delete_auth_group", 00:05:03.250 "iscsi_create_auth_group", 00:05:03.250 "iscsi_set_discovery_auth", 00:05:03.250 "iscsi_get_options", 00:05:03.250 "iscsi_target_node_request_logout", 00:05:03.250 "iscsi_target_node_set_redirect", 00:05:03.250 "iscsi_target_node_set_auth", 00:05:03.250 "iscsi_target_node_add_lun", 00:05:03.250 "iscsi_get_stats", 00:05:03.250 "iscsi_get_connections", 00:05:03.250 "iscsi_portal_group_set_auth", 00:05:03.250 "iscsi_start_portal_group", 00:05:03.250 "iscsi_delete_portal_group", 00:05:03.250 "iscsi_create_portal_group", 00:05:03.250 "iscsi_get_portal_groups", 00:05:03.250 "iscsi_delete_target_node", 00:05:03.250 "iscsi_target_node_remove_pg_ig_maps", 00:05:03.250 "iscsi_target_node_add_pg_ig_maps", 00:05:03.250 "iscsi_create_target_node", 00:05:03.250 "iscsi_get_target_nodes", 00:05:03.250 "iscsi_delete_initiator_group", 00:05:03.250 "iscsi_initiator_group_remove_initiators", 00:05:03.250 "iscsi_initiator_group_add_initiators", 00:05:03.250 "iscsi_create_initiator_group", 00:05:03.250 "iscsi_get_initiator_groups", 00:05:03.250 "nvmf_set_crdt", 00:05:03.250 "nvmf_set_config", 00:05:03.250 "nvmf_set_max_subsystems", 00:05:03.250 "nvmf_stop_mdns_prr", 00:05:03.250 "nvmf_publish_mdns_prr", 00:05:03.250 "nvmf_subsystem_get_listeners", 00:05:03.250 "nvmf_subsystem_get_qpairs", 00:05:03.250 "nvmf_subsystem_get_controllers", 00:05:03.250 "nvmf_get_stats", 00:05:03.250 "nvmf_get_transports", 00:05:03.250 "nvmf_create_transport", 00:05:03.250 "nvmf_get_targets", 00:05:03.250 "nvmf_delete_target", 00:05:03.250 "nvmf_create_target", 00:05:03.250 "nvmf_subsystem_allow_any_host", 00:05:03.250 
"nvmf_subsystem_set_keys", 00:05:03.250 "nvmf_subsystem_remove_host", 00:05:03.250 "nvmf_subsystem_add_host", 00:05:03.250 "nvmf_ns_remove_host", 00:05:03.250 "nvmf_ns_add_host", 00:05:03.250 "nvmf_subsystem_remove_ns", 00:05:03.250 "nvmf_subsystem_set_ns_ana_group", 00:05:03.250 "nvmf_subsystem_add_ns", 00:05:03.250 "nvmf_subsystem_listener_set_ana_state", 00:05:03.250 "nvmf_discovery_get_referrals", 00:05:03.250 "nvmf_discovery_remove_referral", 00:05:03.250 "nvmf_discovery_add_referral", 00:05:03.250 "nvmf_subsystem_remove_listener", 00:05:03.250 "nvmf_subsystem_add_listener", 00:05:03.250 "nvmf_delete_subsystem", 00:05:03.250 "nvmf_create_subsystem", 00:05:03.250 "nvmf_get_subsystems", 00:05:03.250 "env_dpdk_get_mem_stats", 00:05:03.250 "nbd_get_disks", 00:05:03.250 "nbd_stop_disk", 00:05:03.250 "nbd_start_disk", 00:05:03.250 "ublk_recover_disk", 00:05:03.250 "ublk_get_disks", 00:05:03.250 "ublk_stop_disk", 00:05:03.250 "ublk_start_disk", 00:05:03.250 "ublk_destroy_target", 00:05:03.250 "ublk_create_target", 00:05:03.250 "virtio_blk_create_transport", 00:05:03.250 "virtio_blk_get_transports", 00:05:03.250 "vhost_controller_set_coalescing", 00:05:03.250 "vhost_get_controllers", 00:05:03.250 "vhost_delete_controller", 00:05:03.250 "vhost_create_blk_controller", 00:05:03.250 "vhost_scsi_controller_remove_target", 00:05:03.250 "vhost_scsi_controller_add_target", 00:05:03.250 "vhost_start_scsi_controller", 00:05:03.250 "vhost_create_scsi_controller", 00:05:03.250 "thread_set_cpumask", 00:05:03.250 "scheduler_set_options", 00:05:03.250 "framework_get_governor", 00:05:03.250 "framework_get_scheduler", 00:05:03.250 "framework_set_scheduler", 00:05:03.250 "framework_get_reactors", 00:05:03.250 "thread_get_io_channels", 00:05:03.250 "thread_get_pollers", 00:05:03.250 "thread_get_stats", 00:05:03.250 "framework_monitor_context_switch", 00:05:03.250 "spdk_kill_instance", 00:05:03.250 "log_enable_timestamps", 00:05:03.250 "log_get_flags", 00:05:03.250 "log_clear_flag", 
00:05:03.250 "log_set_flag", 00:05:03.250 "log_get_level", 00:05:03.250 "log_set_level", 00:05:03.250 "log_get_print_level", 00:05:03.250 "log_set_print_level", 00:05:03.250 "framework_enable_cpumask_locks", 00:05:03.250 "framework_disable_cpumask_locks", 00:05:03.250 "framework_wait_init", 00:05:03.250 "framework_start_init", 00:05:03.250 "scsi_get_devices", 00:05:03.250 "bdev_get_histogram", 00:05:03.250 "bdev_enable_histogram", 00:05:03.250 "bdev_set_qos_limit", 00:05:03.250 "bdev_set_qd_sampling_period", 00:05:03.250 "bdev_get_bdevs", 00:05:03.250 "bdev_reset_iostat", 00:05:03.250 "bdev_get_iostat", 00:05:03.250 "bdev_examine", 00:05:03.250 "bdev_wait_for_examine", 00:05:03.250 "bdev_set_options", 00:05:03.250 "accel_get_stats", 00:05:03.250 "accel_set_options", 00:05:03.250 "accel_set_driver", 00:05:03.250 "accel_crypto_key_destroy", 00:05:03.250 "accel_crypto_keys_get", 00:05:03.250 "accel_crypto_key_create", 00:05:03.250 "accel_assign_opc", 00:05:03.250 "accel_get_module_info", 00:05:03.250 "accel_get_opc_assignments", 00:05:03.250 "vmd_rescan", 00:05:03.250 "vmd_remove_device", 00:05:03.250 "vmd_enable", 00:05:03.250 "sock_get_default_impl", 00:05:03.250 "sock_set_default_impl", 00:05:03.250 "sock_impl_set_options", 00:05:03.250 "sock_impl_get_options", 00:05:03.250 "iobuf_get_stats", 00:05:03.250 "iobuf_set_options", 00:05:03.250 "keyring_get_keys", 00:05:03.250 "framework_get_pci_devices", 00:05:03.250 "framework_get_config", 00:05:03.250 "framework_get_subsystems", 00:05:03.250 "fsdev_set_opts", 00:05:03.250 "fsdev_get_opts", 00:05:03.250 "trace_get_info", 00:05:03.250 "trace_get_tpoint_group_mask", 00:05:03.250 "trace_disable_tpoint_group", 00:05:03.250 "trace_enable_tpoint_group", 00:05:03.251 "trace_clear_tpoint_mask", 00:05:03.251 "trace_set_tpoint_mask", 00:05:03.251 "notify_get_notifications", 00:05:03.251 "notify_get_types", 00:05:03.251 "spdk_get_version", 00:05:03.251 "rpc_get_methods" 00:05:03.251 ] 00:05:03.251 10:50:10 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:03.251 10:50:10 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:03.251 10:50:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.509 10:50:10 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:03.509 10:50:10 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57982 00:05:03.509 10:50:10 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57982 ']' 00:05:03.509 10:50:10 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57982 00:05:03.509 10:50:10 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:03.509 10:50:10 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:03.509 10:50:10 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57982 00:05:03.509 killing process with pid 57982 00:05:03.509 10:50:10 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:03.509 10:50:10 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:03.509 10:50:10 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57982' 00:05:03.509 10:50:10 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57982 00:05:03.509 10:50:10 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57982 00:05:06.043 00:05:06.043 real 0m4.382s 00:05:06.043 user 0m7.870s 00:05:06.043 sys 0m0.650s 00:05:06.043 10:50:12 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.043 10:50:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:06.043 ************************************ 00:05:06.043 END TEST spdkcli_tcp 00:05:06.043 ************************************ 00:05:06.043 10:50:12 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.043 10:50:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.043 10:50:12 -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.043 10:50:12 -- common/autotest_common.sh@10 -- # set +x 00:05:06.043 ************************************ 00:05:06.043 START TEST dpdk_mem_utility 00:05:06.043 ************************************ 00:05:06.043 10:50:12 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.043 * Looking for test storage... 00:05:06.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:06.043 10:50:12 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:06.043 10:50:12 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:06.043 10:50:12 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:06.338 10:50:13 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:06.338 
10:50:13 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.338 10:50:13 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:06.338 10:50:13 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.338 10:50:13 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:06.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.338 --rc genhtml_branch_coverage=1 00:05:06.338 --rc genhtml_function_coverage=1 00:05:06.338 --rc genhtml_legend=1 00:05:06.338 --rc geninfo_all_blocks=1 00:05:06.338 --rc geninfo_unexecuted_blocks=1 00:05:06.338 00:05:06.338 ' 00:05:06.338 10:50:13 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:06.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.338 --rc 
genhtml_branch_coverage=1 00:05:06.338 --rc genhtml_function_coverage=1 00:05:06.338 --rc genhtml_legend=1 00:05:06.338 --rc geninfo_all_blocks=1 00:05:06.338 --rc geninfo_unexecuted_blocks=1 00:05:06.338 00:05:06.338 ' 00:05:06.338 10:50:13 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:06.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.338 --rc genhtml_branch_coverage=1 00:05:06.338 --rc genhtml_function_coverage=1 00:05:06.338 --rc genhtml_legend=1 00:05:06.338 --rc geninfo_all_blocks=1 00:05:06.338 --rc geninfo_unexecuted_blocks=1 00:05:06.338 00:05:06.338 ' 00:05:06.338 10:50:13 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:06.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.338 --rc genhtml_branch_coverage=1 00:05:06.338 --rc genhtml_function_coverage=1 00:05:06.338 --rc genhtml_legend=1 00:05:06.338 --rc geninfo_all_blocks=1 00:05:06.338 --rc geninfo_unexecuted_blocks=1 00:05:06.338 00:05:06.338 ' 00:05:06.338 10:50:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:06.338 10:50:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58110 00:05:06.338 10:50:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.339 10:50:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58110 00:05:06.339 10:50:13 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58110 ']' 00:05:06.339 10:50:13 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.339 10:50:13 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:06.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:06.339 10:50:13 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.339 10:50:13 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:06.339 10:50:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:06.339 [2024-11-15 10:50:13.136433] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:05:06.339 [2024-11-15 10:50:13.136570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58110 ] 00:05:06.599 [2024-11-15 10:50:13.310745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.599 [2024-11-15 10:50:13.427856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.539 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:07.539 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:07.539 10:50:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:07.539 10:50:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:07.539 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.539 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.539 { 00:05:07.539 "filename": "/tmp/spdk_mem_dump.txt" 00:05:07.539 } 00:05:07.539 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.539 10:50:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:07.539 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:07.539 1 heaps 
totaling size 816.000000 MiB 00:05:07.539 size: 816.000000 MiB heap id: 0 00:05:07.539 end heaps---------- 00:05:07.539 9 mempools totaling size 595.772034 MiB 00:05:07.539 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:07.539 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:07.539 size: 92.545471 MiB name: bdev_io_58110 00:05:07.539 size: 50.003479 MiB name: msgpool_58110 00:05:07.539 size: 36.509338 MiB name: fsdev_io_58110 00:05:07.539 size: 21.763794 MiB name: PDU_Pool 00:05:07.539 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:07.539 size: 4.133484 MiB name: evtpool_58110 00:05:07.539 size: 0.026123 MiB name: Session_Pool 00:05:07.539 end mempools------- 00:05:07.539 6 memzones totaling size 4.142822 MiB 00:05:07.539 size: 1.000366 MiB name: RG_ring_0_58110 00:05:07.539 size: 1.000366 MiB name: RG_ring_1_58110 00:05:07.539 size: 1.000366 MiB name: RG_ring_4_58110 00:05:07.539 size: 1.000366 MiB name: RG_ring_5_58110 00:05:07.539 size: 0.125366 MiB name: RG_ring_2_58110 00:05:07.539 size: 0.015991 MiB name: RG_ring_3_58110 00:05:07.539 end memzones------- 00:05:07.539 10:50:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:07.539 heap id: 0 total size: 816.000000 MiB number of busy elements: 323 number of free elements: 18 00:05:07.539 list of free elements. 
size: 16.789429 MiB 00:05:07.539 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:07.539 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:07.539 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:07.539 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:07.539 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:07.539 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:07.539 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:07.539 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:07.539 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:07.539 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:07.539 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:07.539 element at address: 0x20001ac00000 with size: 0.559753 MiB 00:05:07.539 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:07.539 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:07.539 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:07.539 element at address: 0x200012c00000 with size: 0.443481 MiB 00:05:07.539 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:07.539 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:07.539 list of standard malloc elements. 
size: 199.289673 MiB 00:05:07.539 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:07.539 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:07.539 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:07.539 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:07.539 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:07.539 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:07.539 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:07.539 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:07.539 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:07.539 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:07.539 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:07.539 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:07.539 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:07.539 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:07.540 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:07.540 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:07.540 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:07.540 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:07.540 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:07.540 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:07.540 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:07.540 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:07.540 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:07.540 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:07.540 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac909c0 with size: 0.000244 
MiB 00:05:07.540 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:07.540 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac925c0 
with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:07.541 element at 
address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:07.541 element at address: 0x200028063f40 with size: 0.000244 MiB 00:05:07.541 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806af80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806b080 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806b480 with size: 0.000244 MiB 
00:05:07.541 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806d080 with 
size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:07.541 element at address: 
0x20002806ec80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:07.541 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:07.541 list of memzone associated elements. 
size: 599.920898 MiB 00:05:07.541 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:07.541 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:07.541 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:07.541 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:07.541 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:07.541 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58110_0 00:05:07.541 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:07.541 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58110_0 00:05:07.541 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:07.541 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58110_0 00:05:07.541 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:07.541 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:07.541 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:07.541 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:07.541 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:07.541 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58110_0 00:05:07.541 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:07.541 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58110 00:05:07.541 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:07.542 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58110 00:05:07.542 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:07.542 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:07.542 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:07.542 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:07.542 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:07.542 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:07.542 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:07.542 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:07.542 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:07.542 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58110 00:05:07.542 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:07.542 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58110 00:05:07.542 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:07.542 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58110 00:05:07.542 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:07.542 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58110 00:05:07.542 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:07.542 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58110 00:05:07.542 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:07.542 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58110 00:05:07.542 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:07.542 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:07.542 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:07.542 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:07.542 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:07.542 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:07.542 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:07.542 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58110 00:05:07.542 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:07.542 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58110 00:05:07.542 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:07.542 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:07.542 element at address: 0x200028064140 with size: 0.023804 MiB 00:05:07.542 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:07.542 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:07.542 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58110 00:05:07.542 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:05:07.542 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:07.542 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:07.542 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58110 00:05:07.542 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:07.542 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58110 00:05:07.542 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:07.542 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58110 00:05:07.542 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:05:07.542 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:07.542 10:50:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:07.542 10:50:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58110 00:05:07.542 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58110 ']' 00:05:07.542 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58110 00:05:07.542 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:07.542 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:07.542 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58110 00:05:07.801 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:07.801 10:50:14 dpdk_mem_utility -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:07.801 killing process with pid 58110 00:05:07.801 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58110' 00:05:07.801 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58110 00:05:07.801 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58110 00:05:10.338 00:05:10.338 real 0m4.167s 00:05:10.338 user 0m4.152s 00:05:10.338 sys 0m0.586s 00:05:10.338 10:50:16 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:10.338 10:50:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:10.338 ************************************ 00:05:10.338 END TEST dpdk_mem_utility 00:05:10.338 ************************************ 00:05:10.338 10:50:17 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:10.338 10:50:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:10.338 10:50:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:10.338 10:50:17 -- common/autotest_common.sh@10 -- # set +x 00:05:10.338 ************************************ 00:05:10.338 START TEST event 00:05:10.338 ************************************ 00:05:10.339 10:50:17 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:10.339 * Looking for test storage... 
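The teardown trace above (kill -0, ps comm lookup, kill, wait) follows a common harness pattern. The sketch below is a hypothetical standalone reimplementation, not the actual helper from autotest_common.sh, assuming bash on Linux where `ps --no-headers -o comm=` is available:

```shell
# Minimal sketch of the process-teardown pattern seen in the trace.
# Hypothetical rewrite; the real helper lives in autotest_common.sh.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                  # no pid supplied
    kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
    # Resolve the command name so a sudo wrapper is never signalled directly
    # (on non-Linux systems the ps flags differ, hence the uname check in the trace)
    local name
    name=$(ps --no-headers -o comm= "$pid")
    if [ "$name" != "sudo" ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                    # reap the child; ignore its exit code
    fi
}
```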
00:05:10.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:10.339 10:50:17 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:10.339 10:50:17 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:10.339 10:50:17 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:10.339 10:50:17 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:10.339 10:50:17 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.339 10:50:17 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.339 10:50:17 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.339 10:50:17 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.339 10:50:17 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.339 10:50:17 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.339 10:50:17 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.339 10:50:17 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.339 10:50:17 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.339 10:50:17 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.339 10:50:17 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.339 10:50:17 event -- scripts/common.sh@344 -- # case "$op" in 00:05:10.339 10:50:17 event -- scripts/common.sh@345 -- # : 1 00:05:10.339 10:50:17 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.339 10:50:17 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.339 10:50:17 event -- scripts/common.sh@365 -- # decimal 1 00:05:10.339 10:50:17 event -- scripts/common.sh@353 -- # local d=1 00:05:10.339 10:50:17 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.339 10:50:17 event -- scripts/common.sh@355 -- # echo 1 00:05:10.339 10:50:17 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.339 10:50:17 event -- scripts/common.sh@366 -- # decimal 2 00:05:10.598 10:50:17 event -- scripts/common.sh@353 -- # local d=2 00:05:10.598 10:50:17 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.598 10:50:17 event -- scripts/common.sh@355 -- # echo 2 00:05:10.598 10:50:17 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.598 10:50:17 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.598 10:50:17 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.598 10:50:17 event -- scripts/common.sh@368 -- # return 0 00:05:10.598 10:50:17 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.598 10:50:17 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:10.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.598 --rc genhtml_branch_coverage=1 00:05:10.598 --rc genhtml_function_coverage=1 00:05:10.598 --rc genhtml_legend=1 00:05:10.598 --rc geninfo_all_blocks=1 00:05:10.598 --rc geninfo_unexecuted_blocks=1 00:05:10.598 00:05:10.598 ' 00:05:10.598 10:50:17 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:10.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.598 --rc genhtml_branch_coverage=1 00:05:10.598 --rc genhtml_function_coverage=1 00:05:10.598 --rc genhtml_legend=1 00:05:10.598 --rc geninfo_all_blocks=1 00:05:10.598 --rc geninfo_unexecuted_blocks=1 00:05:10.598 00:05:10.598 ' 00:05:10.598 10:50:17 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:10.598 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:10.598 --rc genhtml_branch_coverage=1 00:05:10.598 --rc genhtml_function_coverage=1 00:05:10.598 --rc genhtml_legend=1 00:05:10.598 --rc geninfo_all_blocks=1 00:05:10.598 --rc geninfo_unexecuted_blocks=1 00:05:10.598 00:05:10.598 ' 00:05:10.598 10:50:17 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:10.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.598 --rc genhtml_branch_coverage=1 00:05:10.598 --rc genhtml_function_coverage=1 00:05:10.598 --rc genhtml_legend=1 00:05:10.598 --rc geninfo_all_blocks=1 00:05:10.598 --rc geninfo_unexecuted_blocks=1 00:05:10.598 00:05:10.598 ' 00:05:10.598 10:50:17 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:10.598 10:50:17 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:10.598 10:50:17 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:10.598 10:50:17 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:10.598 10:50:17 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:10.598 10:50:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.598 ************************************ 00:05:10.599 START TEST event_perf 00:05:10.599 ************************************ 00:05:10.599 10:50:17 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:10.599 Running I/O for 1 seconds...[2024-11-15 10:50:17.333729] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:05:10.599 [2024-11-15 10:50:17.333824] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58218 ] 00:05:10.599 [2024-11-15 10:50:17.510256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:10.858 Running I/O for 1 seconds...[2024-11-15 10:50:17.636738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.858 [2024-11-15 10:50:17.636934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.858 [2024-11-15 10:50:17.637030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.858 [2024-11-15 10:50:17.637060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.234 00:05:12.234 lcore 0: 102291 00:05:12.234 lcore 1: 102291 00:05:12.234 lcore 2: 102293 00:05:12.234 lcore 3: 102291 00:05:12.234 done. 
00:05:12.234 ************************************ 00:05:12.234 END TEST event_perf 00:05:12.234 ************************************ 00:05:12.234 00:05:12.234 real 0m1.599s 00:05:12.234 user 0m4.354s 00:05:12.234 sys 0m0.120s 00:05:12.234 10:50:18 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:12.234 10:50:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:12.234 10:50:18 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:12.234 10:50:18 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:12.234 10:50:18 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:12.234 10:50:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.234 ************************************ 00:05:12.234 START TEST event_reactor 00:05:12.234 ************************************ 00:05:12.234 10:50:18 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:12.234 [2024-11-15 10:50:19.004698] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:05:12.234 [2024-11-15 10:50:19.004804] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58256 ] 00:05:12.493 [2024-11-15 10:50:19.180017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.493 [2024-11-15 10:50:19.296317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.874 test_start 00:05:13.874 oneshot 00:05:13.874 tick 100 00:05:13.874 tick 100 00:05:13.874 tick 250 00:05:13.874 tick 100 00:05:13.874 tick 100 00:05:13.874 tick 100 00:05:13.874 tick 250 00:05:13.874 tick 500 00:05:13.874 tick 100 00:05:13.874 tick 100 00:05:13.874 tick 250 00:05:13.874 tick 100 00:05:13.874 tick 100 00:05:13.874 test_end 00:05:13.874 00:05:13.874 real 0m1.575s 00:05:13.874 user 0m1.385s 00:05:13.874 sys 0m0.081s 00:05:13.874 10:50:20 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:13.874 10:50:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:13.874 ************************************ 00:05:13.874 END TEST event_reactor 00:05:13.874 ************************************ 00:05:13.874 10:50:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:13.874 10:50:20 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:13.874 10:50:20 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:13.874 10:50:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.874 ************************************ 00:05:13.874 START TEST event_reactor_perf 00:05:13.874 ************************************ 00:05:13.874 10:50:20 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:13.874 [2024-11-15 
10:50:20.648566] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:05:13.874 [2024-11-15 10:50:20.648766] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58294 ] 00:05:14.133 [2024-11-15 10:50:20.841131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.133 [2024-11-15 10:50:20.979634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.513 test_start 00:05:15.513 test_end 00:05:15.513 Performance: 332841 events per second 00:05:15.513 00:05:15.513 real 0m1.638s 00:05:15.513 user 0m1.410s 00:05:15.513 sys 0m0.118s 00:05:15.513 10:50:22 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.513 10:50:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.513 ************************************ 00:05:15.513 END TEST event_reactor_perf 00:05:15.513 ************************************ 00:05:15.513 10:50:22 event -- event/event.sh@49 -- # uname -s 00:05:15.513 10:50:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:15.513 10:50:22 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:15.513 10:50:22 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.513 10:50:22 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.513 10:50:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.513 ************************************ 00:05:15.513 START TEST event_scheduler 00:05:15.513 ************************************ 00:05:15.513 10:50:22 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:15.513 * Looking for test storage... 
00:05:15.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:15.513 10:50:22 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:15.513 10:50:22 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:15.513 10:50:22 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:15.773 10:50:22 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.773 10:50:22 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:15.773 10:50:22 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.773 10:50:22 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:15.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.773 --rc genhtml_branch_coverage=1 00:05:15.773 --rc genhtml_function_coverage=1 00:05:15.773 --rc genhtml_legend=1 00:05:15.773 --rc geninfo_all_blocks=1 00:05:15.773 --rc geninfo_unexecuted_blocks=1 00:05:15.773 00:05:15.773 ' 00:05:15.773 10:50:22 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:15.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.773 --rc genhtml_branch_coverage=1 00:05:15.773 --rc genhtml_function_coverage=1 00:05:15.773 --rc 
genhtml_legend=1 00:05:15.773 --rc geninfo_all_blocks=1 00:05:15.773 --rc geninfo_unexecuted_blocks=1 00:05:15.773 00:05:15.773 ' 00:05:15.773 10:50:22 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:15.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.773 --rc genhtml_branch_coverage=1 00:05:15.773 --rc genhtml_function_coverage=1 00:05:15.773 --rc genhtml_legend=1 00:05:15.773 --rc geninfo_all_blocks=1 00:05:15.773 --rc geninfo_unexecuted_blocks=1 00:05:15.773 00:05:15.773 ' 00:05:15.773 10:50:22 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:15.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.773 --rc genhtml_branch_coverage=1 00:05:15.773 --rc genhtml_function_coverage=1 00:05:15.773 --rc genhtml_legend=1 00:05:15.773 --rc geninfo_all_blocks=1 00:05:15.773 --rc geninfo_unexecuted_blocks=1 00:05:15.773 00:05:15.773 ' 00:05:15.773 10:50:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:15.773 10:50:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58370 00:05:15.773 10:50:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:15.773 10:50:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.773 10:50:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58370 00:05:15.773 10:50:22 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58370 ']' 00:05:15.773 10:50:22 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.773 10:50:22 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:15.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:15.773 10:50:22 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.773 10:50:22 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:15.773 10:50:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.773 [2024-11-15 10:50:22.600920] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:05:15.773 [2024-11-15 10:50:22.601084] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58370 ] 00:05:16.052 [2024-11-15 10:50:22.772018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:16.052 [2024-11-15 10:50:22.902939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.052 [2024-11-15 10:50:22.903140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.052 [2024-11-15 10:50:22.903274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.053 [2024-11-15 10:50:22.903423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.669 10:50:23 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:16.669 10:50:23 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:16.669 10:50:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:16.669 10:50:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.669 10:50:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.669 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:16.669 POWER: Cannot set governor of lcore 0 to userspace 00:05:16.669 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:16.669 POWER: Cannot set governor of lcore 0 to performance 00:05:16.669 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:16.669 POWER: Cannot set governor of lcore 0 to userspace 00:05:16.669 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:16.669 POWER: Cannot set governor of lcore 0 to userspace 00:05:16.669 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:16.669 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:16.669 POWER: Unable to set Power Management Environment for lcore 0 00:05:16.669 [2024-11-15 10:50:23.519879] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:16.669 [2024-11-15 10:50:23.519905] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:16.669 [2024-11-15 10:50:23.519917] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:16.669 [2024-11-15 10:50:23.519939] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:16.669 [2024-11-15 10:50:23.519949] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:16.669 [2024-11-15 10:50:23.519960] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:16.669 10:50:23 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.669 10:50:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:16.669 10:50:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.669 10:50:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.237 [2024-11-15 10:50:23.873199] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:17.237 10:50:23 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.237 10:50:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:17.237 10:50:23 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.237 10:50:23 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.237 10:50:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.237 ************************************ 00:05:17.237 START TEST scheduler_create_thread 00:05:17.237 ************************************ 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.237 2 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.237 3 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.237 4 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.237 5 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.237 6 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:17.237 7 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.237 8 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.237 10:50:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:17.238 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.238 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.238 9 00:05:17.238 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.238 10:50:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:17.238 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.238 10:50:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.238 10 00:05:17.238 10:50:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.238 10:50:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:17.238 10:50:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.238 10:50:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.617 10:50:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.617 10:50:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:18.617 10:50:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:18.617 10:50:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.617 10:50:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.554 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.554 10:50:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:19.554 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.554 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.122 10:50:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.122 10:50:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:20.122 10:50:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:20.122 10:50:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.122 10:50:27 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.061 10:50:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.061 00:05:21.061 real 0m3.885s 00:05:21.061 user 0m0.023s 00:05:21.061 sys 0m0.014s 00:05:21.061 10:50:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.061 10:50:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.061 ************************************ 00:05:21.061 END TEST scheduler_create_thread 00:05:21.061 ************************************ 00:05:21.061 10:50:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:21.061 10:50:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58370 00:05:21.061 10:50:27 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58370 ']' 00:05:21.061 10:50:27 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58370 00:05:21.061 10:50:27 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:21.061 10:50:27 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:21.061 10:50:27 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58370 00:05:21.061 10:50:27 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:21.061 10:50:27 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:21.061 killing process with pid 58370 00:05:21.061 10:50:27 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58370' 00:05:21.061 10:50:27 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58370 00:05:21.061 10:50:27 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58370 00:05:21.320 [2024-11-15 10:50:28.151176] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:22.727 00:05:22.727 real 0m7.077s 00:05:22.727 user 0m14.821s 00:05:22.727 sys 0m0.528s 00:05:22.727 10:50:29 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:22.727 10:50:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.727 ************************************ 00:05:22.727 END TEST event_scheduler 00:05:22.727 ************************************ 00:05:22.727 10:50:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:22.727 10:50:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:22.727 10:50:29 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:22.727 10:50:29 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:22.727 10:50:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.727 ************************************ 00:05:22.727 START TEST app_repeat 00:05:22.727 ************************************ 00:05:22.727 10:50:29 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:22.727 10:50:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.727 10:50:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.727 10:50:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:22.727 10:50:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.727 10:50:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:22.727 10:50:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:22.727 10:50:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:22.727 10:50:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58493 00:05:22.727 10:50:29 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:22.727 
10:50:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.727 Process app_repeat pid: 58493 00:05:22.727 10:50:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58493' 00:05:22.727 10:50:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.727 spdk_app_start Round 0 00:05:22.727 10:50:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:22.727 10:50:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58493 /var/tmp/spdk-nbd.sock 00:05:22.727 10:50:29 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58493 ']' 00:05:22.727 10:50:29 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.727 10:50:29 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:22.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.727 10:50:29 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.727 10:50:29 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:22.727 10:50:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.727 [2024-11-15 10:50:29.502785] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:05:22.728 [2024-11-15 10:50:29.502879] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58493 ] 00:05:22.987 [2024-11-15 10:50:29.676884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.987 [2024-11-15 10:50:29.796640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.987 [2024-11-15 10:50:29.796675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.554 10:50:30 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:23.554 10:50:30 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:23.554 10:50:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.813 Malloc0 00:05:23.813 10:50:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.073 Malloc1 00:05:24.073 10:50:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.073 10:50:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.073 10:50:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.073 10:50:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:24.073 10:50:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.073 10:50:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:24.073 10:50:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.073 10:50:30 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.073 10:50:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.073 10:50:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.073 10:50:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.073 10:50:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.073 10:50:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:24.073 10:50:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.073 10:50:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.073 10:50:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:24.331 /dev/nbd0 00:05:24.331 10:50:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.331 10:50:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.331 10:50:31 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:24.331 10:50:31 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:24.331 10:50:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:24.331 10:50:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:24.331 10:50:31 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:24.331 10:50:31 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:24.331 10:50:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:24.331 10:50:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:24.331 10:50:31 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.331 1+0 records in 00:05:24.331 1+0 
records out 00:05:24.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418572 s, 9.8 MB/s 00:05:24.331 10:50:31 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.331 10:50:31 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:24.331 10:50:31 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.331 10:50:31 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:24.331 10:50:31 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:24.331 10:50:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.331 10:50:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.331 10:50:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.590 /dev/nbd1 00:05:24.590 10:50:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.590 10:50:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.590 10:50:31 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:24.590 10:50:31 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:24.590 10:50:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:24.590 10:50:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:24.590 10:50:31 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:24.590 10:50:31 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:24.590 10:50:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:24.590 10:50:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:24.590 10:50:31 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.590 1+0 records in 00:05:24.590 1+0 records out 00:05:24.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310535 s, 13.2 MB/s 00:05:24.590 10:50:31 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.590 10:50:31 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:24.590 10:50:31 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.590 10:50:31 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:24.590 10:50:31 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:24.590 10:50:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.590 10:50:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.590 10:50:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.590 10:50:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.590 10:50:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.850 10:50:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.850 { 00:05:24.850 "nbd_device": "/dev/nbd0", 00:05:24.850 "bdev_name": "Malloc0" 00:05:24.850 }, 00:05:24.850 { 00:05:24.850 "nbd_device": "/dev/nbd1", 00:05:24.850 "bdev_name": "Malloc1" 00:05:24.850 } 00:05:24.850 ]' 00:05:24.850 10:50:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.850 { 00:05:24.850 "nbd_device": "/dev/nbd0", 00:05:24.850 "bdev_name": "Malloc0" 00:05:24.850 }, 00:05:24.850 { 00:05:24.850 "nbd_device": "/dev/nbd1", 00:05:24.850 "bdev_name": "Malloc1" 00:05:24.850 } 00:05:24.850 ]' 00:05:24.850 10:50:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.110 /dev/nbd1' 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:25.110 /dev/nbd1' 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.110 256+0 records in 00:05:25.110 256+0 records out 00:05:25.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137788 s, 76.1 MB/s 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.110 256+0 records in 00:05:25.110 256+0 records out 00:05:25.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019906 s, 52.7 MB/s 00:05:25.110 10:50:31 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.110 256+0 records in 00:05:25.110 256+0 records out 00:05:25.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290251 s, 36.1 MB/s 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.110 10:50:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.369 10:50:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.369 10:50:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.369 10:50:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.369 10:50:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.369 10:50:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.370 10:50:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.370 10:50:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.370 10:50:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.370 10:50:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.370 10:50:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.701 10:50:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.701 10:50:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.701 10:50:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.701 10:50:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.701 10:50:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.701 10:50:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.701 10:50:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:25.701 10:50:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.701 10:50:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.701 10:50:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.701 10:50:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.701 10:50:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.701 10:50:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.701 10:50:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.961 10:50:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.961 10:50:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.961 10:50:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.961 10:50:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.961 10:50:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.961 10:50:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.961 10:50:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.961 10:50:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.961 10:50:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.961 10:50:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.221 10:50:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:27.603 [2024-11-15 10:50:34.249936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.603 [2024-11-15 10:50:34.362863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.603 [2024-11-15 10:50:34.362868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.863 
[2024-11-15 10:50:34.551069] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.863 [2024-11-15 10:50:34.551211] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:29.308 10:50:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.308 spdk_app_start Round 1 00:05:29.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.308 10:50:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:29.308 10:50:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58493 /var/tmp/spdk-nbd.sock 00:05:29.308 10:50:36 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58493 ']' 00:05:29.308 10:50:36 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.308 10:50:36 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:29.308 10:50:36 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:29.308 10:50:36 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:29.308 10:50:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.568 10:50:36 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.568 10:50:36 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:29.568 10:50:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.827 Malloc0 00:05:29.827 10:50:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.087 Malloc1 00:05:30.087 10:50:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.087 10:50:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.087 10:50:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.087 10:50:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.087 10:50:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.087 10:50:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.087 10:50:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.087 10:50:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.087 10:50:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.087 10:50:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.087 10:50:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.087 10:50:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.087 10:50:36 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.087 10:50:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.087 10:50:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.087 10:50:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.346 /dev/nbd0 00:05:30.346 10:50:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.346 10:50:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.346 10:50:37 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:30.346 10:50:37 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:30.346 10:50:37 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:30.346 10:50:37 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:30.346 10:50:37 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:30.346 10:50:37 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:30.346 10:50:37 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:30.346 10:50:37 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:30.346 10:50:37 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.346 1+0 records in 00:05:30.346 1+0 records out 00:05:30.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492072 s, 8.3 MB/s 00:05:30.346 10:50:37 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.346 10:50:37 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:30.346 10:50:37 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.346 10:50:37 
event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:30.346 10:50:37 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:30.346 10:50:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.346 10:50:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.346 10:50:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.605 /dev/nbd1 00:05:30.605 10:50:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.605 10:50:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.605 10:50:37 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:30.605 10:50:37 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:30.605 10:50:37 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:30.605 10:50:37 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:30.605 10:50:37 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:30.605 10:50:37 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:30.605 10:50:37 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:30.605 10:50:37 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:30.605 10:50:37 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.605 1+0 records in 00:05:30.605 1+0 records out 00:05:30.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248844 s, 16.5 MB/s 00:05:30.606 10:50:37 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.606 10:50:37 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:30.606 10:50:37 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.606 10:50:37 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:30.606 10:50:37 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:30.606 10:50:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.606 10:50:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.606 10:50:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.606 10:50:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.606 10:50:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.864 10:50:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.864 { 00:05:30.864 "nbd_device": "/dev/nbd0", 00:05:30.864 "bdev_name": "Malloc0" 00:05:30.864 }, 00:05:30.864 { 00:05:30.864 "nbd_device": "/dev/nbd1", 00:05:30.864 "bdev_name": "Malloc1" 00:05:30.864 } 00:05:30.864 ]' 00:05:30.864 10:50:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.864 { 00:05:30.864 "nbd_device": "/dev/nbd0", 00:05:30.864 "bdev_name": "Malloc0" 00:05:30.864 }, 00:05:30.864 { 00:05:30.864 "nbd_device": "/dev/nbd1", 00:05:30.864 "bdev_name": "Malloc1" 00:05:30.864 } 00:05:30.864 ]' 00:05:30.864 10:50:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.864 10:50:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.864 /dev/nbd1' 00:05:30.864 10:50:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.864 /dev/nbd1' 00:05:30.864 10:50:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.864 10:50:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.864 10:50:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.864 
10:50:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.864 10:50:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.864 10:50:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.864 10:50:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.864 10:50:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.864 10:50:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.864 10:50:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.864 10:50:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.864 10:50:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.865 256+0 records in 00:05:30.865 256+0 records out 00:05:30.865 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134333 s, 78.1 MB/s 00:05:30.865 10:50:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.865 10:50:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.865 256+0 records in 00:05:30.865 256+0 records out 00:05:30.865 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239016 s, 43.9 MB/s 00:05:30.865 10:50:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.865 10:50:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.124 256+0 records in 00:05:31.124 256+0 records out 00:05:31.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251015 s, 41.8 MB/s 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.124 10:50:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.383 10:50:38 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.383 10:50:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.383 10:50:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.383 10:50:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.383 10:50:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.383 10:50:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.383 10:50:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.383 10:50:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.383 10:50:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.383 10:50:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.644 10:50:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.644 10:50:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.644 10:50:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.644 10:50:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.644 10:50:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.644 10:50:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.644 10:50:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.644 10:50:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.644 10:50:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.644 10:50:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.644 10:50:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.644 10:50:38 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.644 10:50:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.644 10:50:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.905 10:50:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.905 10:50:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.905 10:50:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.905 10:50:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.905 10:50:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.905 10:50:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.905 10:50:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.905 10:50:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.906 10:50:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.906 10:50:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.165 10:50:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:33.539 [2024-11-15 10:50:40.196833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.539 [2024-11-15 10:50:40.319033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.539 [2024-11-15 10:50:40.319054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.798 [2024-11-15 10:50:40.524648] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.798 [2024-11-15 10:50:40.524739] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.214 spdk_app_start Round 2 00:05:35.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:35.214 10:50:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.214 10:50:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:35.214 10:50:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58493 /var/tmp/spdk-nbd.sock 00:05:35.214 10:50:42 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58493 ']' 00:05:35.214 10:50:42 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.214 10:50:42 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:35.214 10:50:42 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.214 10:50:42 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:35.214 10:50:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.473 10:50:42 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:35.473 10:50:42 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:35.473 10:50:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.732 Malloc0 00:05:35.732 10:50:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.990 Malloc1 00:05:36.250 10:50:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.250 10:50:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.250 10:50:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.250 10:50:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.250 10:50:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.250 10:50:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.250 10:50:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.250 10:50:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.250 10:50:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.250 10:50:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.250 10:50:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.250 10:50:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.250 10:50:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.250 10:50:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.250 10:50:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.250 10:50:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.250 /dev/nbd0 00:05:36.509 10:50:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.509 10:50:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.509 10:50:43 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:36.509 10:50:43 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:36.509 10:50:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:36.509 10:50:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:36.509 10:50:43 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:36.509 10:50:43 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:36.509 10:50:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:05:36.509 10:50:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:36.509 10:50:43 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.509 1+0 records in 00:05:36.509 1+0 records out 00:05:36.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267748 s, 15.3 MB/s 00:05:36.509 10:50:43 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.509 10:50:43 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:36.509 10:50:43 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.509 10:50:43 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:36.509 10:50:43 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:36.510 10:50:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.510 10:50:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.510 10:50:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.769 /dev/nbd1 00:05:36.769 10:50:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.769 10:50:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.769 10:50:43 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:36.769 10:50:43 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:36.769 10:50:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:36.769 10:50:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:36.769 10:50:43 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:36.769 10:50:43 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:05:36.769 10:50:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:36.769 10:50:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:36.769 10:50:43 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.769 1+0 records in 00:05:36.769 1+0 records out 00:05:36.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052362 s, 7.8 MB/s 00:05:36.769 10:50:43 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.769 10:50:43 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:36.769 10:50:43 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.769 10:50:43 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:36.769 10:50:43 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:36.769 10:50:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.769 10:50:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.769 10:50:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.769 10:50:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.769 10:50:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:37.028 { 00:05:37.028 "nbd_device": "/dev/nbd0", 00:05:37.028 "bdev_name": "Malloc0" 00:05:37.028 }, 00:05:37.028 { 00:05:37.028 "nbd_device": "/dev/nbd1", 00:05:37.028 "bdev_name": "Malloc1" 00:05:37.028 } 00:05:37.028 ]' 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.028 { 00:05:37.028 "nbd_device": "/dev/nbd0", 00:05:37.028 "bdev_name": "Malloc0" 00:05:37.028 }, 00:05:37.028 { 00:05:37.028 "nbd_device": "/dev/nbd1", 00:05:37.028 "bdev_name": "Malloc1" 00:05:37.028 } 00:05:37.028 ]' 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.028 /dev/nbd1' 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.028 /dev/nbd1' 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.028 256+0 records in 00:05:37.028 256+0 records out 00:05:37.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140301 s, 74.7 MB/s 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.028 10:50:43 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.028 256+0 records in 00:05:37.028 256+0 records out 00:05:37.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279246 s, 37.6 MB/s 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.028 256+0 records in 00:05:37.028 256+0 records out 00:05:37.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267148 s, 39.3 MB/s 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.028 10:50:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.029 10:50:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.287 10:50:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:05:37.288 10:50:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.288 10:50:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.288 10:50:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.288 10:50:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.288 10:50:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:37.288 10:50:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.288 10:50:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.559 10:50:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.559 10:50:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.559 10:50:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.559 10:50:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.559 10:50:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.559 10:50:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.559 10:50:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.559 10:50:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.559 10:50:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.559 10:50:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.818 10:50:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.818 10:50:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.818 10:50:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.818 10:50:44 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.818 10:50:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.818 10:50:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.818 10:50:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.818 10:50:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.818 10:50:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.818 10:50:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.818 10:50:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.077 10:50:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.077 10:50:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.077 10:50:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.077 10:50:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.077 10:50:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.077 10:50:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.077 10:50:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:38.077 10:50:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.077 10:50:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.077 10:50:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.077 10:50:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.077 10:50:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.077 10:50:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.646 10:50:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.023 
[2024-11-15 10:50:46.608701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.023 [2024-11-15 10:50:46.726718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.023 [2024-11-15 10:50:46.726719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.023 [2024-11-15 10:50:46.924547] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.023 [2024-11-15 10:50:46.924665] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.437 10:50:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58493 /var/tmp/spdk-nbd.sock 00:05:41.437 10:50:48 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58493 ']' 00:05:41.437 10:50:48 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.437 10:50:48 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:41.437 10:50:48 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:41.437 10:50:48 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:41.437 10:50:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.697 10:50:48 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:41.697 10:50:48 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:41.697 10:50:48 event.app_repeat -- event/event.sh@39 -- # killprocess 58493 00:05:41.697 10:50:48 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58493 ']' 00:05:41.697 10:50:48 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58493 00:05:41.697 10:50:48 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:41.957 10:50:48 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:41.957 10:50:48 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58493 00:05:41.957 killing process with pid 58493 00:05:41.957 10:50:48 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:41.957 10:50:48 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:41.957 10:50:48 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58493' 00:05:41.957 10:50:48 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58493 00:05:41.957 10:50:48 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58493 00:05:43.395 spdk_app_start is called in Round 0. 00:05:43.395 Shutdown signal received, stop current app iteration 00:05:43.395 Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 reinitialization... 00:05:43.395 spdk_app_start is called in Round 1. 00:05:43.395 Shutdown signal received, stop current app iteration 00:05:43.395 Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 reinitialization... 00:05:43.395 spdk_app_start is called in Round 2. 
00:05:43.395 Shutdown signal received, stop current app iteration 00:05:43.395 Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 reinitialization... 00:05:43.395 spdk_app_start is called in Round 3. 00:05:43.395 Shutdown signal received, stop current app iteration 00:05:43.395 ************************************ 00:05:43.395 END TEST app_repeat 00:05:43.395 ************************************ 00:05:43.395 10:50:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:43.395 10:50:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:43.395 00:05:43.395 real 0m20.503s 00:05:43.395 user 0m44.316s 00:05:43.395 sys 0m2.945s 00:05:43.395 10:50:49 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:43.395 10:50:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.395 10:50:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:43.395 10:50:49 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:43.395 10:50:49 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:43.395 10:50:49 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:43.395 10:50:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.395 ************************************ 00:05:43.395 START TEST cpu_locks 00:05:43.395 ************************************ 00:05:43.395 10:50:49 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:43.395 * Looking for test storage... 
00:05:43.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:43.395 10:50:50 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:43.395 10:50:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:43.395 10:50:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:43.395 10:50:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.395 10:50:50 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:43.395 10:50:50 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.395 10:50:50 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:43.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.395 --rc genhtml_branch_coverage=1 00:05:43.395 --rc genhtml_function_coverage=1 00:05:43.395 --rc genhtml_legend=1 00:05:43.395 --rc geninfo_all_blocks=1 00:05:43.395 --rc geninfo_unexecuted_blocks=1 00:05:43.395 00:05:43.395 ' 00:05:43.395 10:50:50 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:43.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.395 --rc genhtml_branch_coverage=1 00:05:43.395 --rc genhtml_function_coverage=1 00:05:43.395 --rc genhtml_legend=1 00:05:43.395 --rc geninfo_all_blocks=1 00:05:43.395 --rc geninfo_unexecuted_blocks=1 
00:05:43.395 00:05:43.395 ' 00:05:43.395 10:50:50 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:43.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.395 --rc genhtml_branch_coverage=1 00:05:43.395 --rc genhtml_function_coverage=1 00:05:43.395 --rc genhtml_legend=1 00:05:43.395 --rc geninfo_all_blocks=1 00:05:43.395 --rc geninfo_unexecuted_blocks=1 00:05:43.395 00:05:43.395 ' 00:05:43.395 10:50:50 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:43.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.395 --rc genhtml_branch_coverage=1 00:05:43.395 --rc genhtml_function_coverage=1 00:05:43.395 --rc genhtml_legend=1 00:05:43.395 --rc geninfo_all_blocks=1 00:05:43.395 --rc geninfo_unexecuted_blocks=1 00:05:43.395 00:05:43.395 ' 00:05:43.395 10:50:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:43.395 10:50:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:43.395 10:50:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:43.395 10:50:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:43.395 10:50:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:43.395 10:50:50 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:43.395 10:50:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.395 ************************************ 00:05:43.395 START TEST default_locks 00:05:43.395 ************************************ 00:05:43.395 10:50:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:43.395 10:50:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58953 00:05:43.395 10:50:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.395 
10:50:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58953 00:05:43.395 10:50:50 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58953 ']' 00:05:43.395 10:50:50 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.395 10:50:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:43.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.395 10:50:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.395 10:50:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:43.395 10:50:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.395 [2024-11-15 10:50:50.289924] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:05:43.395 [2024-11-15 10:50:50.290057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58953 ] 00:05:43.653 [2024-11-15 10:50:50.484248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.912 [2024-11-15 10:50:50.649448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.847 10:50:51 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:44.847 10:50:51 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:44.847 10:50:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58953 00:05:44.847 10:50:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58953 00:05:44.847 10:50:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.106 10:50:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58953 00:05:45.106 10:50:51 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58953 ']' 00:05:45.106 10:50:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58953 00:05:45.106 10:50:51 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:45.106 10:50:51 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:45.106 10:50:51 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58953 00:05:45.106 10:50:51 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:45.106 10:50:51 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:45.106 killing process with pid 58953 00:05:45.106 10:50:51 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 58953' 00:05:45.106 10:50:51 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58953 00:05:45.106 10:50:51 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58953 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58953 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58953 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58953 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58953 ']' 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:47.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.661 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58953) - No such process 00:05:47.661 ERROR: process (pid: 58953) is no longer running 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:47.661 00:05:47.661 real 0m4.265s 00:05:47.661 user 0m4.248s 00:05:47.661 sys 0m0.635s 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.661 10:50:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.661 ************************************ 00:05:47.661 END TEST default_locks 00:05:47.661 ************************************ 00:05:47.661 10:50:54 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:47.661 10:50:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # 
'[' 2 -le 1 ']' 00:05:47.661 10:50:54 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.661 10:50:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.661 ************************************ 00:05:47.661 START TEST default_locks_via_rpc 00:05:47.661 ************************************ 00:05:47.661 10:50:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:47.661 10:50:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59028 00:05:47.661 10:50:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59028 00:05:47.661 10:50:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.661 10:50:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59028 ']' 00:05:47.661 10:50:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.661 10:50:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:47.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.661 10:50:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.661 10:50:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:47.661 10:50:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.920 [2024-11-15 10:50:54.613218] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:05:47.920 [2024-11-15 10:50:54.613359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59028 ] 00:05:47.920 [2024-11-15 10:50:54.791358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.178 [2024-11-15 10:50:54.920216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.142 10:50:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:49.142 10:50:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:49.142 10:50:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:49.142 10:50:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.142 10:50:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.142 10:50:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.142 10:50:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:49.142 10:50:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.142 10:50:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.142 10:50:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.142 10:50:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:49.142 10:50:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.142 10:50:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.142 10:50:55 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.142 10:50:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59028 00:05:49.142 10:50:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59028 00:05:49.142 10:50:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.711 10:50:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59028 00:05:49.711 10:50:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 59028 ']' 00:05:49.711 10:50:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 59028 00:05:49.711 10:50:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:49.711 10:50:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:49.711 10:50:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59028 00:05:49.711 10:50:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:49.711 10:50:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:49.711 killing process with pid 59028 00:05:49.711 10:50:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59028' 00:05:49.711 10:50:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 59028 00:05:49.711 10:50:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 59028 00:05:52.245 00:05:52.245 real 0m4.275s 00:05:52.245 user 0m4.244s 00:05:52.245 sys 0m0.689s 00:05:52.245 10:50:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:52.245 10:50:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.245 ************************************ 00:05:52.245 END TEST default_locks_via_rpc 00:05:52.245 ************************************ 00:05:52.245 10:50:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:52.245 10:50:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:52.245 10:50:58 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:52.245 10:50:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.245 ************************************ 00:05:52.245 START TEST non_locking_app_on_locked_coremask 00:05:52.245 ************************************ 00:05:52.245 10:50:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:52.245 10:50:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.245 10:50:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59102 00:05:52.245 10:50:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59102 /var/tmp/spdk.sock 00:05:52.245 10:50:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59102 ']' 00:05:52.245 10:50:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.245 10:50:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:52.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:52.245 10:50:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.245 10:50:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:52.245 10:50:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.245 [2024-11-15 10:50:58.948328] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:05:52.245 [2024-11-15 10:50:58.948497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59102 ] 00:05:52.245 [2024-11-15 10:50:59.133253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.505 [2024-11-15 10:50:59.257961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.443 10:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:53.443 10:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:53.443 10:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:53.443 10:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59128 00:05:53.443 10:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59128 /var/tmp/spdk2.sock 00:05:53.443 10:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59128 ']' 00:05:53.443 10:51:00 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.443 10:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:53.443 10:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.443 10:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:53.443 10:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.443 [2024-11-15 10:51:00.305068] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:05:53.443 [2024-11-15 10:51:00.305199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59128 ] 00:05:53.703 [2024-11-15 10:51:00.482482] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:53.703 [2024-11-15 10:51:00.482570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.961 [2024-11-15 10:51:00.769452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.498 10:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:56.498 10:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:56.498 10:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59102 00:05:56.498 10:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59102 00:05:56.498 10:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.757 10:51:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59102 00:05:56.757 10:51:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59102 ']' 00:05:57.015 10:51:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59102 00:05:57.015 10:51:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:57.015 10:51:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:57.015 10:51:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59102 00:05:57.015 10:51:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:57.015 10:51:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:57.015 killing process with pid 59102 00:05:57.015 10:51:03 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 59102' 00:05:57.015 10:51:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59102 00:05:57.015 10:51:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59102 00:06:02.292 10:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59128 00:06:02.292 10:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59128 ']' 00:06:02.292 10:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59128 00:06:02.292 10:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:02.292 10:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:02.292 10:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59128 00:06:02.292 10:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:02.292 10:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:02.292 10:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59128' 00:06:02.292 killing process with pid 59128 00:06:02.292 10:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59128 00:06:02.292 10:51:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59128 00:06:04.821 00:06:04.821 real 0m12.462s 00:06:04.821 user 0m12.779s 00:06:04.821 sys 0m1.373s 00:06:04.821 10:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:06:04.821 10:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.821 ************************************ 00:06:04.821 END TEST non_locking_app_on_locked_coremask 00:06:04.821 ************************************ 00:06:04.821 10:51:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:04.821 10:51:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:04.821 10:51:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:04.821 10:51:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.821 ************************************ 00:06:04.821 START TEST locking_app_on_unlocked_coremask 00:06:04.821 ************************************ 00:06:04.821 10:51:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:04.821 10:51:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59280 00:06:04.821 10:51:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:04.821 10:51:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59280 /var/tmp/spdk.sock 00:06:04.821 10:51:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59280 ']' 00:06:04.821 10:51:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.821 10:51:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:04.821 10:51:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.821 10:51:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:04.821 10:51:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.821 [2024-11-15 10:51:11.440810] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:06:04.821 [2024-11-15 10:51:11.440971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59280 ] 00:06:04.821 [2024-11-15 10:51:11.609945] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:04.821 [2024-11-15 10:51:11.610035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.078 [2024-11-15 10:51:11.759394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:06.011 10:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:06.011 10:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:06.011 10:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59302 00:06:06.011 10:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.011 10:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59302 /var/tmp/spdk2.sock 00:06:06.011 10:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59302 ']' 00:06:06.011 10:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.011 10:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:06.011 10:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.011 10:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:06.011 10:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.011 [2024-11-15 10:51:12.852586] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:06:06.011 [2024-11-15 10:51:12.852771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59302 ] 00:06:06.268 [2024-11-15 10:51:13.054868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.526 [2024-11-15 10:51:13.351827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.096 10:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:09.096 10:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:09.096 10:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59302 00:06:09.096 10:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.096 10:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59302 00:06:09.097 10:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59280 00:06:09.097 10:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59280 ']' 00:06:09.097 10:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59280 00:06:09.097 10:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:09.097 10:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:09.097 10:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59280 00:06:09.097 killing process with pid 59280 00:06:09.097 10:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:09.097 10:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:09.097 10:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59280' 00:06:09.097 10:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59280 00:06:09.097 10:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59280 00:06:14.398 10:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59302 00:06:14.398 10:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59302 ']' 00:06:14.398 10:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59302 00:06:14.398 10:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:14.398 10:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:14.398 10:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59302 00:06:14.398 killing process with pid 59302 00:06:14.398 10:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:14.398 10:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:14.398 10:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59302' 00:06:14.398 10:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59302 00:06:14.398 10:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@976 -- # wait 59302 00:06:16.933 ************************************ 00:06:16.933 END TEST locking_app_on_unlocked_coremask 00:06:16.933 00:06:16.933 real 0m12.070s 00:06:16.933 user 0m12.567s 00:06:16.933 sys 0m1.223s 00:06:16.933 10:51:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:16.933 10:51:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.933 ************************************ 00:06:16.933 10:51:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:16.933 10:51:23 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:16.933 10:51:23 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:16.933 10:51:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.933 ************************************ 00:06:16.933 START TEST locking_app_on_locked_coremask 00:06:16.933 ************************************ 00:06:16.933 10:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:16.933 10:51:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59455 00:06:16.933 10:51:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.933 10:51:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59455 /var/tmp/spdk.sock 00:06:16.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:16.933 10:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59455 ']' 00:06:16.933 10:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.933 10:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:16.933 10:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.933 10:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:16.933 10:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.933 [2024-11-15 10:51:23.575447] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:06:16.933 [2024-11-15 10:51:23.575595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59455 ] 00:06:16.933 [2024-11-15 10:51:23.756168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.192 [2024-11-15 10:51:23.887763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.131 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:18.131 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:18.131 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:18.131 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # 
spdk_tgt_pid2=59476 00:06:18.131 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59476 /var/tmp/spdk2.sock 00:06:18.131 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:18.131 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59476 /var/tmp/spdk2.sock 00:06:18.131 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:18.131 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.131 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:18.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.131 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.131 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59476 /var/tmp/spdk2.sock 00:06:18.131 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59476 ']' 00:06:18.131 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.131 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:18.131 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:18.132 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:18.132 10:51:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.132 [2024-11-15 10:51:24.918978] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:06:18.132 [2024-11-15 10:51:24.919099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59476 ] 00:06:18.391 [2024-11-15 10:51:25.100721] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59455 has claimed it. 00:06:18.391 [2024-11-15 10:51:25.100832] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:18.650 ERROR: process (pid: 59476) is no longer running 00:06:18.650 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59476) - No such process 00:06:18.650 10:51:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:18.650 10:51:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:18.650 10:51:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:18.650 10:51:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.650 10:51:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:18.650 10:51:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.650 10:51:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59455 00:06:18.650 10:51:25 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59455 00:06:18.650 10:51:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.220 10:51:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59455 00:06:19.220 10:51:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59455 ']' 00:06:19.220 10:51:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59455 00:06:19.220 10:51:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:19.220 10:51:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:19.220 10:51:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59455 00:06:19.220 10:51:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:19.220 10:51:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:19.220 killing process with pid 59455 00:06:19.220 10:51:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59455' 00:06:19.220 10:51:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59455 00:06:19.220 10:51:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59455 00:06:21.761 00:06:21.761 real 0m5.182s 00:06:21.761 user 0m5.415s 00:06:21.761 sys 0m0.853s 00:06:21.761 ************************************ 00:06:21.761 END TEST locking_app_on_locked_coremask 00:06:21.761 ************************************ 00:06:21.761 10:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:06:21.761 10:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.021 10:51:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:22.021 10:51:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:22.021 10:51:28 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.021 10:51:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.021 ************************************ 00:06:22.021 START TEST locking_overlapped_coremask 00:06:22.021 ************************************ 00:06:22.021 10:51:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:22.021 10:51:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:22.021 10:51:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59546 00:06:22.021 10:51:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59546 /var/tmp/spdk.sock 00:06:22.021 10:51:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59546 ']' 00:06:22.021 10:51:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.021 10:51:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:22.021 10:51:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:22.021 10:51:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:22.021 10:51:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.021 [2024-11-15 10:51:28.813496] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:06:22.021 [2024-11-15 10:51:28.814099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59546 ] 00:06:22.280 [2024-11-15 10:51:28.987071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.280 [2024-11-15 10:51:29.110686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.280 [2024-11-15 10:51:29.110708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.280 [2024-11-15 10:51:29.110715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59570 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59570 /var/tmp/spdk2.sock 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59570 
/var/tmp/spdk2.sock 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59570 /var/tmp/spdk2.sock 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59570 ']' 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:23.217 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.476 [2024-11-15 10:51:30.157365] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
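The `NOT waitforlisten 59570 /var/tmp/spdk2.sock` invocation above is the harness's negative assertion: `valid_exec_arg` first confirms the argument is a callable function or binary, then `NOT` runs it and succeeds only when it fails. A rough sketch of the inversion (simplified; the real helper also tracks an `es` exit-status variable, as the `es=1` trace shows):

```shell
# Simplified sketch of the NOT wrapper traced above: run a command
# and invert its status, so an expected failure makes the test pass.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # command failed, which is what the caller wanted
}

NOT false && echo "false failed as expected"
```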
00:06:23.476 [2024-11-15 10:51:30.157907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59570 ] 00:06:23.476 [2024-11-15 10:51:30.338470] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59546 has claimed it. 00:06:23.476 [2024-11-15 10:51:30.338548] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:24.099 ERROR: process (pid: 59570) is no longer running 00:06:24.099 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59570) - No such process 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59546 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59546 ']' 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59546 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59546 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59546' 00:06:24.099 killing process with pid 59546 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59546 00:06:24.099 10:51:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59546 00:06:26.637 00:06:26.637 real 0m4.611s 00:06:26.637 user 0m12.549s 00:06:26.637 sys 0m0.618s 00:06:26.637 10:51:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.637 10:51:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.637 ************************************ 
00:06:26.637 END TEST locking_overlapped_coremask 00:06:26.637 ************************************ 00:06:26.637 10:51:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:26.637 10:51:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:26.637 10:51:33 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.637 10:51:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.637 ************************************ 00:06:26.637 START TEST locking_overlapped_coremask_via_rpc 00:06:26.637 ************************************ 00:06:26.637 10:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:26.637 10:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:26.637 10:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59634 00:06:26.637 10:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59634 /var/tmp/spdk.sock 00:06:26.637 10:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59634 ']' 00:06:26.638 10:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.638 10:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:26.638 10:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
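The `check_remaining_locks` step in the test above verifies that a target started with `-m 0x7` (three cores) left exactly `/var/tmp/spdk_cpu_lock_000` through `_002` behind: it globs the actual lock files into one array, brace-expands the expected names into another, and compares. The same comparison can be sketched against a temp directory (paths here are stand-ins for the real `/var/tmp` files):

```shell
# Sketch of check_remaining_locks: glob the actual lock files and
# compare them to the brace-expanded list a 3-core mask (0x7) implies.
lockdir=$(mktemp -d)
touch "$lockdir"/spdk_cpu_lock_{000..002}

locks=("$lockdir"/spdk_cpu_lock_*)                    # what actually exists
locks_expected=("$lockdir"/spdk_cpu_lock_{000..002})  # what the mask implies

if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
    echo "lock files match core mask"
fi
rm -rf "$lockdir"
```

Globs sort lexicographically and the zero-padded brace expansion is already sorted, so a plain string comparison of the joined arrays suffices.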
00:06:26.638 10:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:26.638 10:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.638 [2024-11-15 10:51:33.466413] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:06:26.638 [2024-11-15 10:51:33.466618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59634 ] 00:06:26.897 [2024-11-15 10:51:33.643226] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:26.897 [2024-11-15 10:51:33.643391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.897 [2024-11-15 10:51:33.773775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.897 [2024-11-15 10:51:33.773914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.897 [2024-11-15 10:51:33.773951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:27.836 10:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:27.836 10:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:27.836 10:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59652 00:06:27.836 10:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59652 /var/tmp/spdk2.sock 00:06:27.836 10:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59652 ']' 00:06:27.836 10:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.836 10:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:27.836 10:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.836 10:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:27.836 10:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:27.836 10:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.096 [2024-11-15 10:51:34.822894] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:06:28.096 [2024-11-15 10:51:34.823030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59652 ] 00:06:28.096 [2024-11-15 10:51:35.004281] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:28.096 [2024-11-15 10:51:35.004353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.355 [2024-11-15 10:51:35.273232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.355 [2024-11-15 10:51:35.273377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.355 [2024-11-15 10:51:35.273411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.931 [2024-11-15 10:51:37.476507] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59634 has claimed it. 
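The "Cannot create lock on core 2, probably process 59634 has claimed it" error above is SPDK's per-core locking at work: each reactor takes an exclusive lock on a `/var/tmp/spdk_cpu_lock_NNN` file, so two targets with overlapping masks (0x7 and 0x1c both contain core 2) cannot both start unless `--disable-cpumask-locks` is passed. The collision can be reproduced with the util-linux `flock` tool alone (the lock path is a stand-in, not a real SPDK lock file):

```shell
# Reproduce the core-lock collision with util-linux flock: the first
# holder keeps an exclusive lock, so the second non-blocking try fails.
lockfile=$(mktemp)

# First "process" claims the core lock and holds it briefly.
flock -x "$lockfile" -c 'sleep 2' &
holder=$!
sleep 0.2   # give the holder time to acquire the lock

# Second "process" attempts a non-blocking claim on the same core.
if ! flock -n "$lockfile" -c true; then
    echo "core already claimed by pid $holder"
fi
wait "$holder"
rm -f "$lockfile"
```

This is also why the `lslocks -p PID | grep -q spdk_cpu_lock` check earlier in the log works: the claimed lock shows up in the process's lock table.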
00:06:30.931 request: 00:06:30.931 { 00:06:30.931 "method": "framework_enable_cpumask_locks", 00:06:30.931 "req_id": 1 00:06:30.931 } 00:06:30.931 Got JSON-RPC error response 00:06:30.931 response: 00:06:30.931 { 00:06:30.931 "code": -32603, 00:06:30.931 "message": "Failed to claim CPU core: 2" 00:06:30.931 } 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59634 /var/tmp/spdk.sock 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59634 ']' 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
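The RPC failure logged above is a standard JSON-RPC error object: `framework_enable_cpumask_locks` on the second target returns code -32603 with "Failed to claim CPU core: 2" because the first target already holds that core's lock. A quick sketch of detecting such a response in shell (the response text is taken from the log; real tooling would parse it with `jq` or SPDK's `rpc.py` client rather than `sed`):

```shell
# Sketch: detect a JSON-RPC error response like the one logged above.
response='{"code": -32603, "message": "Failed to claim CPU core: 2"}'

if grep -q '"code": -32603' <<< "$response"; then
    # crude extraction of the "message" field, for illustration only
    echo "RPC failed: $(sed -n 's/.*"message": "\([^"]*\)".*/\1/p' <<< "$response")"
fi
```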
00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59652 /var/tmp/spdk2.sock 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59652 ']' 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:30.931 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.191 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:31.191 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:31.191 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:31.191 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:31.191 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:31.191 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:31.191 00:06:31.191 real 0m4.576s 00:06:31.191 user 0m1.417s 00:06:31.191 sys 0m0.206s 00:06:31.191 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:31.191 10:51:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.191 ************************************ 00:06:31.191 END TEST locking_overlapped_coremask_via_rpc 00:06:31.191 ************************************ 00:06:31.191 10:51:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:31.191 10:51:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59634 ]] 00:06:31.191 10:51:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59634 00:06:31.191 10:51:37 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59634 ']' 00:06:31.191 10:51:37 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59634 00:06:31.191 10:51:37 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:31.191 10:51:38 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:31.191 10:51:38 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59634 00:06:31.191 killing process with pid 59634 00:06:31.191 10:51:38 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:31.191 10:51:38 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:31.191 10:51:38 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59634' 00:06:31.191 10:51:38 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59634 00:06:31.191 10:51:38 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59634 00:06:34.486 10:51:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59652 ]] 00:06:34.486 10:51:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59652 00:06:34.486 10:51:40 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59652 ']' 00:06:34.486 10:51:40 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59652 00:06:34.486 10:51:40 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:34.486 10:51:40 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:34.486 10:51:40 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59652 00:06:34.486 killing process with pid 59652 00:06:34.486 10:51:40 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:34.486 10:51:40 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:34.486 10:51:40 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 59652' 00:06:34.486 10:51:40 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59652 00:06:34.486 10:51:40 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59652 00:06:36.393 10:51:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:36.393 10:51:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:36.393 10:51:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59634 ]] 00:06:36.393 10:51:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59634 00:06:36.393 10:51:43 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59634 ']' 00:06:36.393 10:51:43 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59634 00:06:36.393 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59634) - No such process 00:06:36.393 Process with pid 59634 is not found 00:06:36.393 10:51:43 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59634 is not found' 00:06:36.393 10:51:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59652 ]] 00:06:36.393 10:51:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59652 00:06:36.393 10:51:43 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59652 ']' 00:06:36.393 10:51:43 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59652 00:06:36.393 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59652) - No such process 00:06:36.393 Process with pid 59652 is not found 00:06:36.393 10:51:43 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59652 is not found' 00:06:36.393 10:51:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:36.393 00:06:36.393 real 0m53.211s 00:06:36.393 user 1m31.244s 00:06:36.393 sys 0m6.717s 00:06:36.393 10:51:43 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.393 10:51:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.393 
************************************ 00:06:36.393 END TEST cpu_locks 00:06:36.393 ************************************ 00:06:36.393 00:06:36.393 real 1m26.210s 00:06:36.393 user 2m37.767s 00:06:36.393 sys 0m10.903s 00:06:36.393 10:51:43 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.393 10:51:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.393 ************************************ 00:06:36.393 END TEST event 00:06:36.393 ************************************ 00:06:36.393 10:51:43 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:36.393 10:51:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:36.393 10:51:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:36.393 10:51:43 -- common/autotest_common.sh@10 -- # set +x 00:06:36.393 ************************************ 00:06:36.393 START TEST thread 00:06:36.393 ************************************ 00:06:36.393 10:51:43 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:36.651 * Looking for test storage... 
00:06:36.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:36.651 10:51:43 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:36.651 10:51:43 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:36.651 10:51:43 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:36.651 10:51:43 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:36.651 10:51:43 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.651 10:51:43 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.651 10:51:43 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.651 10:51:43 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.651 10:51:43 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.651 10:51:43 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.651 10:51:43 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.651 10:51:43 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.651 10:51:43 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.651 10:51:43 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.651 10:51:43 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.651 10:51:43 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:36.651 10:51:43 thread -- scripts/common.sh@345 -- # : 1 00:06:36.651 10:51:43 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.651 10:51:43 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.651 10:51:43 thread -- scripts/common.sh@365 -- # decimal 1 00:06:36.651 10:51:43 thread -- scripts/common.sh@353 -- # local d=1 00:06:36.651 10:51:43 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.651 10:51:43 thread -- scripts/common.sh@355 -- # echo 1 00:06:36.651 10:51:43 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.651 10:51:43 thread -- scripts/common.sh@366 -- # decimal 2 00:06:36.651 10:51:43 thread -- scripts/common.sh@353 -- # local d=2 00:06:36.651 10:51:43 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.651 10:51:43 thread -- scripts/common.sh@355 -- # echo 2 00:06:36.651 10:51:43 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.651 10:51:43 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.651 10:51:43 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.651 10:51:43 thread -- scripts/common.sh@368 -- # return 0 00:06:36.651 10:51:43 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.651 10:51:43 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:36.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.651 --rc genhtml_branch_coverage=1 00:06:36.651 --rc genhtml_function_coverage=1 00:06:36.651 --rc genhtml_legend=1 00:06:36.651 --rc geninfo_all_blocks=1 00:06:36.651 --rc geninfo_unexecuted_blocks=1 00:06:36.651 00:06:36.651 ' 00:06:36.651 10:51:43 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:36.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.651 --rc genhtml_branch_coverage=1 00:06:36.651 --rc genhtml_function_coverage=1 00:06:36.651 --rc genhtml_legend=1 00:06:36.651 --rc geninfo_all_blocks=1 00:06:36.651 --rc geninfo_unexecuted_blocks=1 00:06:36.651 00:06:36.651 ' 00:06:36.651 10:51:43 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:36.651 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.651 --rc genhtml_branch_coverage=1 00:06:36.651 --rc genhtml_function_coverage=1 00:06:36.651 --rc genhtml_legend=1 00:06:36.651 --rc geninfo_all_blocks=1 00:06:36.651 --rc geninfo_unexecuted_blocks=1 00:06:36.651 00:06:36.651 ' 00:06:36.651 10:51:43 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:36.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.652 --rc genhtml_branch_coverage=1 00:06:36.652 --rc genhtml_function_coverage=1 00:06:36.652 --rc genhtml_legend=1 00:06:36.652 --rc geninfo_all_blocks=1 00:06:36.652 --rc geninfo_unexecuted_blocks=1 00:06:36.652 00:06:36.652 ' 00:06:36.652 10:51:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:36.652 10:51:43 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:36.652 10:51:43 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:36.652 10:51:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.652 ************************************ 00:06:36.652 START TEST thread_poller_perf 00:06:36.652 ************************************ 00:06:36.652 10:51:43 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:36.910 [2024-11-15 10:51:43.577334] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:06:36.910 [2024-11-15 10:51:43.577492] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59858 ] 00:06:36.910 [2024-11-15 10:51:43.775894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.170 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:37.170 [2024-11-15 10:51:43.892554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.544 [2024-11-15T10:51:45.472Z] ====================================== 00:06:38.544 [2024-11-15T10:51:45.472Z] busy:2303097212 (cyc) 00:06:38.544 [2024-11-15T10:51:45.472Z] total_run_count: 382000 00:06:38.544 [2024-11-15T10:51:45.472Z] tsc_hz: 2290000000 (cyc) 00:06:38.544 [2024-11-15T10:51:45.472Z] ====================================== 00:06:38.544 [2024-11-15T10:51:45.472Z] poller_cost: 6029 (cyc), 2632 (nsec) 00:06:38.544 00:06:38.544 real 0m1.602s 00:06:38.544 user 0m1.396s 00:06:38.544 sys 0m0.099s 00:06:38.544 10:51:45 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.544 10:51:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.544 ************************************ 00:06:38.544 END TEST thread_poller_perf 00:06:38.544 ************************************ 00:06:38.544 10:51:45 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:38.544 10:51:45 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:38.544 10:51:45 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.544 10:51:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.544 ************************************ 00:06:38.544 START TEST thread_poller_perf 00:06:38.544 
************************************ 00:06:38.544 10:51:45 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:38.544 [2024-11-15 10:51:45.255785] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:06:38.544 [2024-11-15 10:51:45.255936] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59891 ] 00:06:38.544 [2024-11-15 10:51:45.431101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.803 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:38.803 [2024-11-15 10:51:45.547481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.179 [2024-11-15T10:51:47.107Z] ====================================== 00:06:40.179 [2024-11-15T10:51:47.107Z] busy:2293403792 (cyc) 00:06:40.179 [2024-11-15T10:51:47.107Z] total_run_count: 4855000 00:06:40.179 [2024-11-15T10:51:47.107Z] tsc_hz: 2290000000 (cyc) 00:06:40.179 [2024-11-15T10:51:47.107Z] ====================================== 00:06:40.179 [2024-11-15T10:51:47.107Z] poller_cost: 472 (cyc), 206 (nsec) 00:06:40.179 00:06:40.179 real 0m1.578s 00:06:40.179 user 0m1.381s 00:06:40.179 sys 0m0.090s 00:06:40.179 10:51:46 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:40.179 10:51:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:40.179 ************************************ 00:06:40.179 END TEST thread_poller_perf 00:06:40.179 ************************************ 00:06:40.179 10:51:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:40.179 ************************************ 00:06:40.179 END TEST thread 00:06:40.179 ************************************ 00:06:40.179 
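The two poller_perf summaries above derive a per-poll cost from the busy cycle count, the total run count, and the TSC frequency. A minimal sketch of that arithmetic (the `poller_cost` helper name is an assumption; the input and output figures are taken verbatim from the two runs above):

```shell
# Recompute the poller_cost lines from the two summaries above using
# integer arithmetic: cycles per poll, then cycles converted to ns at tsc_hz.
poller_cost() {
    local busy=$1 runs=$2 tsc_hz=$3
    local cyc=$(( busy / runs ))                 # cycles spent per poll
    local nsec=$(( cyc * 1000000000 / tsc_hz ))  # per-poll cost in nanoseconds
    echo "$cyc $nsec"
}
poller_cost 2303097212 382000 2290000000   # 6029 2632  (the -l 1 run)
poller_cost 2293403792 4855000 2290000000  # 472 206    (the -l 0 run)
```

As expected, the zero-period run polls roughly 12x more often and each poll costs roughly 12x less.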
00:06:40.179 real 0m3.527s 00:06:40.179 user 0m2.933s 00:06:40.179 sys 0m0.395s 00:06:40.179 10:51:46 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:40.179 10:51:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.179 10:51:46 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:40.179 10:51:46 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:40.179 10:51:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:40.179 10:51:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:40.179 10:51:46 -- common/autotest_common.sh@10 -- # set +x 00:06:40.179 ************************************ 00:06:40.179 START TEST app_cmdline 00:06:40.179 ************************************ 00:06:40.179 10:51:46 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:40.179 * Looking for test storage... 00:06:40.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:40.179 10:51:47 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:40.179 10:51:47 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:40.179 10:51:47 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:40.179 10:51:47 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:40.179 10:51:47 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.438 10:51:47 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:40.438 10:51:47 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:40.438 10:51:47 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.438 10:51:47 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:40.438 10:51:47 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.438 10:51:47 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.438 10:51:47 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.438 10:51:47 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:40.438 10:51:47 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.438 10:51:47 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:40.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.439 --rc genhtml_branch_coverage=1 00:06:40.439 --rc genhtml_function_coverage=1 00:06:40.439 --rc 
genhtml_legend=1 00:06:40.439 --rc geninfo_all_blocks=1 00:06:40.439 --rc geninfo_unexecuted_blocks=1 00:06:40.439 00:06:40.439 ' 00:06:40.439 10:51:47 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:40.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.439 --rc genhtml_branch_coverage=1 00:06:40.439 --rc genhtml_function_coverage=1 00:06:40.439 --rc genhtml_legend=1 00:06:40.439 --rc geninfo_all_blocks=1 00:06:40.439 --rc geninfo_unexecuted_blocks=1 00:06:40.439 00:06:40.439 ' 00:06:40.439 10:51:47 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:40.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.439 --rc genhtml_branch_coverage=1 00:06:40.439 --rc genhtml_function_coverage=1 00:06:40.439 --rc genhtml_legend=1 00:06:40.439 --rc geninfo_all_blocks=1 00:06:40.439 --rc geninfo_unexecuted_blocks=1 00:06:40.439 00:06:40.439 ' 00:06:40.439 10:51:47 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:40.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.439 --rc genhtml_branch_coverage=1 00:06:40.439 --rc genhtml_function_coverage=1 00:06:40.439 --rc genhtml_legend=1 00:06:40.439 --rc geninfo_all_blocks=1 00:06:40.439 --rc geninfo_unexecuted_blocks=1 00:06:40.439 00:06:40.439 ' 00:06:40.439 10:51:47 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:40.439 10:51:47 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59980 00:06:40.439 10:51:47 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:40.439 10:51:47 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59980 00:06:40.439 10:51:47 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59980 ']' 00:06:40.439 10:51:47 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.439 10:51:47 app_cmdline -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:06:40.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.439 10:51:47 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.439 10:51:47 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:40.439 10:51:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.439 [2024-11-15 10:51:47.209816] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:06:40.439 [2024-11-15 10:51:47.209934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59980 ] 00:06:40.697 [2024-11-15 10:51:47.386565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.697 [2024-11-15 10:51:47.512651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.633 10:51:48 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:41.633 10:51:48 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:41.633 10:51:48 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:41.891 { 00:06:41.891 "version": "SPDK v25.01-pre git sha1 1a15c7136", 00:06:41.891 "fields": { 00:06:41.891 "major": 25, 00:06:41.891 "minor": 1, 00:06:41.891 "patch": 0, 00:06:41.891 "suffix": "-pre", 00:06:41.891 "commit": "1a15c7136" 00:06:41.891 } 00:06:41.891 } 00:06:41.891 10:51:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:41.891 10:51:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:41.891 10:51:48 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:41.891 10:51:48 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:41.891 10:51:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:41.891 10:51:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:41.891 10:51:48 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.891 10:51:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.891 10:51:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:41.891 10:51:48 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.891 10:51:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:41.891 10:51:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:41.891 10:51:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.891 10:51:48 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:41.891 10:51:48 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.891 10:51:48 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:41.891 10:51:48 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.891 10:51:48 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:41.891 10:51:48 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.891 10:51:48 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:41.891 10:51:48 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.891 10:51:48 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:41.891 10:51:48 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:41.891 10:51:48 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.151 request: 00:06:42.151 { 00:06:42.151 "method": "env_dpdk_get_mem_stats", 00:06:42.151 "req_id": 1 00:06:42.151 } 00:06:42.151 Got JSON-RPC error response 00:06:42.151 response: 00:06:42.151 { 00:06:42.151 "code": -32601, 00:06:42.151 "message": "Method not found" 00:06:42.151 } 00:06:42.151 10:51:48 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:42.151 10:51:48 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.151 10:51:48 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.151 10:51:48 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.151 10:51:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59980 00:06:42.151 10:51:48 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59980 ']' 00:06:42.151 10:51:48 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59980 00:06:42.151 10:51:48 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:42.151 10:51:48 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:42.151 10:51:48 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59980 00:06:42.151 10:51:48 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:42.151 10:51:48 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:42.151 killing process with pid 59980 00:06:42.151 10:51:48 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59980' 00:06:42.151 10:51:48 app_cmdline -- common/autotest_common.sh@971 -- # kill 59980 00:06:42.151 10:51:48 app_cmdline -- common/autotest_common.sh@976 -- # wait 59980 00:06:44.688 00:06:44.688 real 0m4.480s 00:06:44.688 user 0m4.749s 00:06:44.688 sys 0m0.611s 00:06:44.688 10:51:51 app_cmdline -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:06:44.688 10:51:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:44.688 ************************************ 00:06:44.688 END TEST app_cmdline 00:06:44.688 ************************************ 00:06:44.688 10:51:51 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:44.688 10:51:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:44.688 10:51:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.688 10:51:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.688 ************************************ 00:06:44.688 START TEST version 00:06:44.688 ************************************ 00:06:44.688 10:51:51 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:44.688 * Looking for test storage... 00:06:44.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:44.688 10:51:51 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:44.689 10:51:51 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:44.689 10:51:51 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:44.948 10:51:51 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:44.948 10:51:51 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.948 10:51:51 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.948 10:51:51 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.948 10:51:51 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.948 10:51:51 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.948 10:51:51 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.948 10:51:51 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.948 10:51:51 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.948 10:51:51 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.948 10:51:51 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:44.948 10:51:51 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.948 10:51:51 version -- scripts/common.sh@344 -- # case "$op" in 00:06:44.948 10:51:51 version -- scripts/common.sh@345 -- # : 1 00:06:44.948 10:51:51 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.948 10:51:51 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.948 10:51:51 version -- scripts/common.sh@365 -- # decimal 1 00:06:44.948 10:51:51 version -- scripts/common.sh@353 -- # local d=1 00:06:44.948 10:51:51 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.948 10:51:51 version -- scripts/common.sh@355 -- # echo 1 00:06:44.948 10:51:51 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.948 10:51:51 version -- scripts/common.sh@366 -- # decimal 2 00:06:44.948 10:51:51 version -- scripts/common.sh@353 -- # local d=2 00:06:44.948 10:51:51 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.948 10:51:51 version -- scripts/common.sh@355 -- # echo 2 00:06:44.948 10:51:51 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.948 10:51:51 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.948 10:51:51 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.948 10:51:51 version -- scripts/common.sh@368 -- # return 0 00:06:44.948 10:51:51 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.948 10:51:51 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:44.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.948 --rc genhtml_branch_coverage=1 00:06:44.948 --rc genhtml_function_coverage=1 00:06:44.948 --rc genhtml_legend=1 00:06:44.948 --rc geninfo_all_blocks=1 00:06:44.948 --rc geninfo_unexecuted_blocks=1 00:06:44.948 00:06:44.948 ' 00:06:44.948 10:51:51 version -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:06:44.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.948 --rc genhtml_branch_coverage=1 00:06:44.948 --rc genhtml_function_coverage=1 00:06:44.948 --rc genhtml_legend=1 00:06:44.948 --rc geninfo_all_blocks=1 00:06:44.948 --rc geninfo_unexecuted_blocks=1 00:06:44.948 00:06:44.948 ' 00:06:44.948 10:51:51 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:44.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.948 --rc genhtml_branch_coverage=1 00:06:44.948 --rc genhtml_function_coverage=1 00:06:44.948 --rc genhtml_legend=1 00:06:44.948 --rc geninfo_all_blocks=1 00:06:44.948 --rc geninfo_unexecuted_blocks=1 00:06:44.948 00:06:44.948 ' 00:06:44.948 10:51:51 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:44.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.948 --rc genhtml_branch_coverage=1 00:06:44.948 --rc genhtml_function_coverage=1 00:06:44.948 --rc genhtml_legend=1 00:06:44.948 --rc geninfo_all_blocks=1 00:06:44.948 --rc geninfo_unexecuted_blocks=1 00:06:44.948 00:06:44.948 ' 00:06:44.948 10:51:51 version -- app/version.sh@17 -- # get_header_version major 00:06:44.948 10:51:51 version -- app/version.sh@14 -- # cut -f2 00:06:44.948 10:51:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:44.948 10:51:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.948 10:51:51 version -- app/version.sh@17 -- # major=25 00:06:44.948 10:51:51 version -- app/version.sh@18 -- # get_header_version minor 00:06:44.948 10:51:51 version -- app/version.sh@14 -- # cut -f2 00:06:44.949 10:51:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:44.949 10:51:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.949 10:51:51 version -- app/version.sh@18 -- # minor=1 00:06:44.949 10:51:51 
version -- app/version.sh@19 -- # get_header_version patch 00:06:44.949 10:51:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:44.949 10:51:51 version -- app/version.sh@14 -- # cut -f2 00:06:44.949 10:51:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.949 10:51:51 version -- app/version.sh@19 -- # patch=0 00:06:44.949 10:51:51 version -- app/version.sh@20 -- # get_header_version suffix 00:06:44.949 10:51:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:44.949 10:51:51 version -- app/version.sh@14 -- # cut -f2 00:06:44.949 10:51:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.949 10:51:51 version -- app/version.sh@20 -- # suffix=-pre 00:06:44.949 10:51:51 version -- app/version.sh@22 -- # version=25.1 00:06:44.949 10:51:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:44.949 10:51:51 version -- app/version.sh@28 -- # version=25.1rc0 00:06:44.949 10:51:51 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:44.949 10:51:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:44.949 10:51:51 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:44.949 10:51:51 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:44.949 00:06:44.949 real 0m0.310s 00:06:44.949 user 0m0.182s 00:06:44.949 sys 0m0.186s 00:06:44.949 10:51:51 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:44.949 10:51:51 version -- common/autotest_common.sh@10 -- # set +x 00:06:44.949 ************************************ 00:06:44.949 END TEST version 00:06:44.949 ************************************ 00:06:44.949 
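The version.sh trace above extracts each version component with a `grep -E '^#define SPDK_VERSION_…' | cut -f2 | tr -d '"'` pipeline. A self-contained sketch of that pipeline against a hypothetical stand-in header (the real test reads include/spdk/version.h, and the rc0-suffix mapping for `-pre` builds is elided here):

```shell
# Hypothetical stand-in for include/spdk/version.h; fields are tab-separated,
# matching cut's default delimiter as used by the trace above.
printf '#define SPDK_VERSION_MAJOR\t25\n#define SPDK_VERSION_MINOR\t1\n#define SPDK_VERSION_PATCH\t0\n#define SPDK_VERSION_SUFFIX\t"-pre"\n' > version.h

# Same grep/cut/tr pipeline as the app/version.sh trace
get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" version.h | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

version="${major}.${minor}"
if [ "$patch" != "0" ]; then
    version="${version}.${patch}"
fi
echo "$version $suffix"   # 25.1 -pre
```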
10:51:51 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:44.949 10:51:51 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:44.949 10:51:51 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:44.949 10:51:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:44.949 10:51:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.949 10:51:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.949 ************************************ 00:06:44.949 START TEST bdev_raid 00:06:44.949 ************************************ 00:06:44.949 10:51:51 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:45.209 * Looking for test storage... 00:06:45.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:45.209 10:51:51 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:45.209 10:51:51 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:06:45.209 10:51:51 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:45.209 10:51:51 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.209 10:51:52 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:45.209 10:51:52 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.209 10:51:52 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:45.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.209 --rc genhtml_branch_coverage=1 00:06:45.209 --rc genhtml_function_coverage=1 00:06:45.209 --rc genhtml_legend=1 00:06:45.209 --rc geninfo_all_blocks=1 00:06:45.209 --rc geninfo_unexecuted_blocks=1 00:06:45.209 00:06:45.209 ' 00:06:45.209 10:51:52 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:45.209 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:45.209 --rc genhtml_branch_coverage=1 00:06:45.209 --rc genhtml_function_coverage=1 00:06:45.209 --rc genhtml_legend=1 00:06:45.209 --rc geninfo_all_blocks=1 00:06:45.209 --rc geninfo_unexecuted_blocks=1 00:06:45.209 00:06:45.209 ' 00:06:45.209 10:51:52 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:45.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.209 --rc genhtml_branch_coverage=1 00:06:45.209 --rc genhtml_function_coverage=1 00:06:45.209 --rc genhtml_legend=1 00:06:45.209 --rc geninfo_all_blocks=1 00:06:45.209 --rc geninfo_unexecuted_blocks=1 00:06:45.209 00:06:45.209 ' 00:06:45.209 10:51:52 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:45.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.209 --rc genhtml_branch_coverage=1 00:06:45.209 --rc genhtml_function_coverage=1 00:06:45.209 --rc genhtml_legend=1 00:06:45.209 --rc geninfo_all_blocks=1 00:06:45.209 --rc geninfo_unexecuted_blocks=1 00:06:45.209 00:06:45.209 ' 00:06:45.209 10:51:52 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:45.209 10:51:52 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:45.209 10:51:52 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:45.209 10:51:52 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:45.209 10:51:52 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:45.209 10:51:52 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:45.209 10:51:52 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:45.209 10:51:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:45.209 10:51:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.209 10:51:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:45.209 ************************************ 
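Each suite above begins with the same `lt 1.15 2` check (via `cmp_versions` in scripts/common.sh) to decide whether the installed lcov supports the branch-coverage flags. A minimal re-creation of that component-wise comparison (treating missing components as 0 follows the padding visible in the trace; the exact handling of non-numeric parts is an assumption):

```shell
# Sketch of the lt/cmp_versions logic traced above: split both versions on
# IFS=.-: and compare each component numerically, padding the shorter with 0.
version_lt() {
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}
        (( a < b )) && return 0   # first differing component decides
        (( a > b )) && return 1
    done
    return 1                      # equal: not strictly less
}
version_lt 1.15 2 && echo "lcov 1.15 < 2"   # prints: lcov 1.15 < 2
```

This is why lcov 1.15 takes the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option spelling seen throughout the LCOV_OPTS exports above.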
00:06:45.209 START TEST raid1_resize_data_offset_test 00:06:45.209 ************************************ 00:06:45.209 10:51:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:06:45.209 10:51:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60174 00:06:45.209 Process raid pid: 60174 00:06:45.209 10:51:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60174' 00:06:45.209 10:51:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60174 00:06:45.209 10:51:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:45.209 10:51:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 60174 ']' 00:06:45.209 10:51:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.209 10:51:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:45.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.209 10:51:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.209 10:51:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:45.209 10:51:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.468 [2024-11-15 10:51:52.139612] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:06:45.468 [2024-11-15 10:51:52.139728] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.468 [2024-11-15 10:51:52.316820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.728 [2024-11-15 10:51:52.434935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.728 [2024-11-15 10:51:52.650735] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:45.728 [2024-11-15 10:51:52.650794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.297 10:51:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:46.297 10:51:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:06:46.297 10:51:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:46.297 10:51:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.297 10:51:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.297 malloc0 00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.297 malloc1 00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.297 10:51:53 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.297 null0 00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.297 [2024-11-15 10:51:53.176499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:46.297 [2024-11-15 10:51:53.178454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:46.297 [2024-11-15 10:51:53.178519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:46.297 [2024-11-15 10:51:53.178708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:46.297 [2024-11-15 10:51:53.178724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:46.297 [2024-11-15 10:51:53.179020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:46.297 [2024-11-15 10:51:53.179232] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:46.297 [2024-11-15 10:51:53.179251] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:46.297 [2024-11-15 10:51:53.179473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:46.297 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:46.557 10:51:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:06:46.557 10:51:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:06:46.557 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:46.557 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:46.557 [2024-11-15 10:51:53.240425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:06:46.557 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:46.557 10:51:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:06:46.557 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:46.557 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.126 malloc2
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.126 [2024-11-15 10:51:53.833092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:06:47.126 [2024-11-15 10:51:53.851804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:47.126 [2024-11-15 10:51:53.853624] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60174
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 60174 ']'
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 60174
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60174
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:47.126 killing process with pid 60174
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60174'
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 60174
00:06:47.126 10:51:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 60174
00:06:47.126 [2024-11-15 10:51:53.936262] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:47.126 [2024-11-15 10:51:53.938104] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:06:47.126 [2024-11-15 10:51:53.938168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:47.126 [2024-11-15 10:51:53.938187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:06:47.126 [2024-11-15 10:51:53.975804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:47.126 [2024-11-15 10:51:53.976186] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:47.126 [2024-11-15 10:51:53.976215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:49.032 [2024-11-15 10:51:55.822573] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:50.410 10:51:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:06:50.410
00:06:50.410 real 0m4.908s
00:06:50.410 user 0m4.830s
00:06:50.410 sys 0m0.509s
00:06:50.410 10:51:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:50.410 10:51:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:50.410 ************************************
00:06:50.410 END TEST raid1_resize_data_offset_test
00:06:50.410 ************************************
00:06:50.410 10:51:57 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:06:50.410 10:51:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:06:50.410 10:51:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:50.410 10:51:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:50.411 ************************************
00:06:50.411 START TEST raid0_resize_superblock_test
00:06:50.411 ************************************
00:06:50.411 10:51:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0
00:06:50.411 10:51:57 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:06:50.411 10:51:57 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60259
00:06:50.411 10:51:57 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:50.411 Process raid pid: 60259
00:06:50.411 10:51:57 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60259'
00:06:50.411 10:51:57 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60259
00:06:50.411 10:51:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60259 ']'
00:06:50.411 10:51:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:50.411 10:51:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:50.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:50.411 10:51:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:50.411 10:51:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:50.411 10:51:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:50.411 [2024-11-15 10:51:57.133900] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization...
00:06:50.411 [2024-11-15 10:51:57.134020] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:50.411 [2024-11-15 10:51:57.290968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.670 [2024-11-15 10:51:57.415148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.929 [2024-11-15 10:51:57.623592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:50.929 [2024-11-15 10:51:57.623637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:51.188 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:51.188 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0
00:06:51.188 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:51.189 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.189 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.756 malloc0
00:06:51.756 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.756 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:51.756 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.756 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.756 [2024-11-15 10:51:58.588433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:51.756 [2024-11-15 10:51:58.588520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:51.756 [2024-11-15 10:51:58.588555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:06:51.756 [2024-11-15 10:51:58.588581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:51.756 [2024-11-15 10:51:58.590935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:51.756 [2024-11-15 10:51:58.590985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:51.756 pt0
00:06:51.756 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.756 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:51.756 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.756 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.016 a53d7c14-7792-463b-9c4b-bbe5985c530e
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.016 3ca4127b-4838-4756-ae09-609a589c2099
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.016 ea86872b-3f29-416a-a546-5525fcd2b3c0
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.016 [2024-11-15 10:51:58.721946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3ca4127b-4838-4756-ae09-609a589c2099 is claimed
00:06:52.016 [2024-11-15 10:51:58.722075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ea86872b-3f29-416a-a546-5525fcd2b3c0 is claimed
00:06:52.016 [2024-11-15 10:51:58.722258] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:52.016 [2024-11-15 10:51:58.722286] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:06:52.016 [2024-11-15 10:51:58.722611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:52.016 [2024-11-15 10:51:58.722852] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:52.016 [2024-11-15 10:51:58.722873] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:06:52.016 [2024-11-15 10:51:58.723088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:06:52.016 [2024-11-15 10:51:58.830036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.016 [2024-11-15 10:51:58.861941] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:52.016 [2024-11-15 10:51:58.861979] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '3ca4127b-4838-4756-ae09-609a589c2099' was resized: old size 131072, new size 204800
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.016 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.017 [2024-11-15 10:51:58.873856] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:52.017 [2024-11-15 10:51:58.873893] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ea86872b-3f29-416a-a546-5525fcd2b3c0' was resized: old size 131072, new size 204800
00:06:52.017 [2024-11-15 10:51:58.873932] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:06:52.017 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.017 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:52.017 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:52.017 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.017 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.017 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.017 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:52.017 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:52.017 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:52.017 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.017 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.277 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.277 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:06:52.277 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:52.277 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:52.277 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:52.277 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:06:52.277 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.277 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.277 [2024-11-15 10:51:58.957848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:52.277 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.277 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:52.277 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:52.277 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:06:52.277 10:51:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:06:52.277 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.277 10:51:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.277 [2024-11-15 10:51:59.001504] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:06:52.277 [2024-11-15 10:51:59.001595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:06:52.277 [2024-11-15 10:51:59.001618] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:06:52.277 [2024-11-15 10:51:59.001643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:06:52.277 [2024-11-15 10:51:59.001784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:52.277 [2024-11-15 10:51:59.001838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:52.277 [2024-11-15 10:51:59.001861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:52.277 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.277 10:51:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:52.277 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.277 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.277 [2024-11-15 10:51:59.013407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:52.277 [2024-11-15 10:51:59.013478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:52.277 [2024-11-15 10:51:59.013527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:06:52.277 [2024-11-15 10:51:59.013552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:52.277 [2024-11-15 10:51:59.015743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:52.277 [2024-11-15 10:51:59.015787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
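The resize numbers logged above are mutually consistent: two 64 MiB lvols (131072 blocks each) produced a 245760-block RAID-0, which implies an 8192-block per-base data offset; growing each lvol to 100 MiB (204800 blocks) then gives 2 * (204800 - 8192) = 393216 blocks, exactly the new count the log reports. A sketch of that arithmetic (plain Python, not SPDK code; the per-base offset is inferred from the logged totals rather than read from the superblock):

```python
# Infer the per-base data offset from the logged RAID-0 totals, then
# recompute the post-resize block count that the log reports.
NUM_BASES = 2
OLD_BASE, NEW_BASE = 131072, 204800   # lvol sizes before/after resize (blocks)
OLD_RAID = 245760                     # blockcnt logged at raid creation

offset = OLD_BASE - OLD_RAID // NUM_BASES    # blocks reserved per base bdev
new_raid = NUM_BASES * (NEW_BASE - offset)   # expected post-resize total

print(offset)    # 8192
print(new_raid)  # 393216, matching "block count was changed from 245760 to 393216"
```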
00:06:52.277 [2024-11-15 10:51:59.017706] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 3ca4127b-4838-4756-ae09-609a589c2099
00:06:52.277 [2024-11-15 10:51:59.017793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3ca4127b-4838-4756-ae09-609a589c2099 is claimed
00:06:52.277 [2024-11-15 10:51:59.017934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ea86872b-3f29-416a-a546-5525fcd2b3c0
00:06:52.277 [2024-11-15 10:51:59.017972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ea86872b-3f29-416a-a546-5525fcd2b3c0 is claimed
00:06:52.277 [2024-11-15 10:51:59.018171] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev ea86872b-3f29-416a-a546-5525fcd2b3c0 (2) smaller than existing raid bdev Raid (3)
00:06:52.277 [2024-11-15 10:51:59.018207] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 3ca4127b-4838-4756-ae09-609a589c2099: File exists
00:06:52.277 [2024-11-15 10:51:59.018256] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:06:52.277 [2024-11-15 10:51:59.018277] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
00:06:52.277 [2024-11-15 10:51:59.018579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:06:52.278 pt0
00:06:52.278 [2024-11-15 10:51:59.018771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:06:52.278 [2024-11-15 10:51:59.018790] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
00:06:52.278 [2024-11-15 10:51:59.019003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.278 [2024-11-15 10:51:59.041966] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60259
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60259 ']'
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60259
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # uname
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60259
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:52.278 killing process with pid 60259
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60259'
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60259
00:06:52.278 10:51:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60259
00:06:52.278 [2024-11-15 10:51:59.121638] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:52.278 [2024-11-15 10:51:59.121756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:52.278 [2024-11-15 10:51:59.121836] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:52.278 [2024-11-15 10:51:59.121857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:06:54.222 [2024-11-15 10:52:00.620343] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:55.160 10:52:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:06:55.160
00:06:55.160 real 0m4.745s
00:06:55.160 user 0m4.956s
00:06:55.160 sys 0m0.584s
00:06:55.160 10:52:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:55.160 10:52:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:55.160 ************************************
00:06:55.160 END TEST raid0_resize_superblock_test
00:06:55.160 ************************************
00:06:55.160 10:52:01 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:06:55.160 10:52:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:06:55.160 10:52:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:55.160 10:52:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:55.160 ************************************
00:06:55.160 START TEST raid1_resize_superblock_test
00:06:55.160 ************************************
00:06:55.160 10:52:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1
00:06:55.160 10:52:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:06:55.160 10:52:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60363
00:06:55.160 10:52:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:55.160 Process raid pid: 60363
00:06:55.160 10:52:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60363'
00:06:55.160 10:52:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60363
00:06:55.160 10:52:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60363 ']'
00:06:55.160 10:52:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.161 10:52:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:55.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:55.161 10:52:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:55.161 10:52:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:55.161 10:52:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:55.161 [2024-11-15 10:52:01.921799] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization...
00:06:55.161 [2024-11-15 10:52:01.921921] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:55.161 [2024-11-15 10:52:02.078204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:55.420 [2024-11-15 10:52:02.195403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.699 [2024-11-15 10:52:02.403689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:55.699 [2024-11-15 10:52:02.403739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:55.959 10:52:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:55.959 10:52:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0
00:06:55.959 10:52:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:55.959 10:52:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:55.959 10:52:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:56.527 malloc0
00:06:56.527 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:56.527 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:56.527 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:56.527 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:56.527 [2024-11-15 10:52:03.369657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:56.527 [2024-11-15 10:52:03.369722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:56.527 [2024-11-15 10:52:03.369744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:06:56.527 [2024-11-15 10:52:03.369757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:56.527 [2024-11-15 10:52:03.372011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:56.527 [2024-11-15 10:52:03.372052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:56.527 pt0
00:06:56.527 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:56.527 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:56.527 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:56.527 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:56.787 e19c0f41-adf2-48fe-89f1-62a8c159d911
00:06:56.787 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:56.787 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:56.787 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:56.787 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:56.787 687ba6f7-16ce-4056-83d3-eb1798ba8d3d
00:06:56.787 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:56.787 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:56.787 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:56.787 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:56.787 86f69ede-c594-4f5c-9264-3f72cab9c40c
00:06:56.787 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:56.787 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:56.787 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:56.787 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:56.787 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:56.788 [2024-11-15 10:52:03.503249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 687ba6f7-16ce-4056-83d3-eb1798ba8d3d is claimed
00:06:56.788 [2024-11-15 10:52:03.503356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 86f69ede-c594-4f5c-9264-3f72cab9c40c is claimed
00:06:56.788 [2024-11-15 10:52:03.503485] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:56.788 [2024-11-15 10:52:03.503507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:06:56.788 [2024-11-15 10:52:03.503773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:56.788 [2024-11-15 10:52:03.503975]
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:56.788 [2024-11-15 10:52:03.503996] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:56.788 [2024-11-15 10:52:03.504167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.788 [2024-11-15 10:52:03.623343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.788 [2024-11-15 10:52:03.655247] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:56.788 [2024-11-15 10:52:03.655288] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '687ba6f7-16ce-4056-83d3-eb1798ba8d3d' was resized: old size 131072, new size 204800 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:56.788 10:52:03 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.788 [2024-11-15 10:52:03.667117] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:56.788 [2024-11-15 10:52:03.667150] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '86f69ede-c594-4f5c-9264-3f72cab9c40c' was resized: old size 131072, new size 204800 00:06:56.788 [2024-11-15 10:52:03.667174] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:56.788 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.111 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:57.111 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:57.111 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:57.111 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.111 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.111 10:52:03 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.111 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:57.111 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:57.111 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:57.111 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:57.111 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:57.111 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.111 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.111 [2024-11-15 10:52:03.767051] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.111 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.111 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.112 [2024-11-15 10:52:03.794787] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:57.112 [2024-11-15 10:52:03.794863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:06:57.112 [2024-11-15 10:52:03.794895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:57.112 [2024-11-15 10:52:03.795079] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:57.112 [2024-11-15 10:52:03.795298] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.112 [2024-11-15 10:52:03.795394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:57.112 [2024-11-15 10:52:03.795414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.112 [2024-11-15 10:52:03.802679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:57.112 [2024-11-15 10:52:03.802736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:57.112 [2024-11-15 10:52:03.802758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:57.112 [2024-11-15 10:52:03.802772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:57.112 [2024-11-15 10:52:03.805165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:57.112 [2024-11-15 10:52:03.805207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:57.112 [2024-11-15 10:52:03.807037] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
687ba6f7-16ce-4056-83d3-eb1798ba8d3d 00:06:57.112 [2024-11-15 10:52:03.807128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 687ba6f7-16ce-4056-83d3-eb1798ba8d3d is claimed 00:06:57.112 [2024-11-15 10:52:03.807253] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 86f69ede-c594-4f5c-9264-3f72cab9c40c 00:06:57.112 [2024-11-15 10:52:03.807280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 86f69ede-c594-4f5c-9264-3f72cab9c40c is claimed 00:06:57.112 [2024-11-15 10:52:03.807473] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 86f69ede-c594-4f5c-9264-3f72cab9c40c (2) smaller than existing raid bdev Raid (3) 00:06:57.112 [2024-11-15 10:52:03.807502] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 687ba6f7-16ce-4056-83d3-eb1798ba8d3d: File exists 00:06:57.112 [2024-11-15 10:52:03.807539] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:57.112 [2024-11-15 10:52:03.807554] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:57.112 [2024-11-15 10:52:03.807812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:57.112 pt0 00:06:57.112 [2024-11-15 10:52:03.808003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:57.112 [2024-11-15 10:52:03.808016] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:57.112 [2024-11-15 10:52:03.808190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.112 [2024-11-15 10:52:03.831368] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60363 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60363 ']' 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60363 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60363 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60363' 00:06:57.112 killing process with pid 60363 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60363 00:06:57.112 [2024-11-15 10:52:03.908298] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.112 [2024-11-15 10:52:03.908413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.112 10:52:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60363 00:06:57.112 [2024-11-15 10:52:03.908482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:57.112 [2024-11-15 10:52:03.908500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:59.032 [2024-11-15 10:52:05.433397] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:59.969 10:52:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:59.969 00:06:59.969 real 0m4.748s 00:06:59.969 user 0m4.967s 00:06:59.969 sys 0m0.570s 00:06:59.969 10:52:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.969 10:52:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.969 ************************************ 00:06:59.969 END TEST raid1_resize_superblock_test 00:06:59.969 
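The block counts reported above are internally consistent: a 64 MiB lvol is 131072 blocks of 512 bytes, yet the raid1 bdev reports 122880; after resizing each lvol to 100 MiB (204800 blocks) the raid grows to 196608. The constant 8192-block delta is inferred purely from these log numbers (presumably space reserved for the raid superblock, since the test passes `-s`); treat that constant as an observation from this run, not a documented SPDK value. A minimal sketch of the arithmetic:

```shell
# Arithmetic behind the block counts seen in the log (512-byte blocks).
# The 8192-block overhead is inferred from this run's numbers
# (131072 - 122880), assumed to be the superblock reservation.
blk_per_mb=$(( 1024 * 1024 / 512 ))     # 2048 blocks per MiB
before=$(( 64  * blk_per_mb ))          # 64 MiB lvol  -> 131072 blocks
after=$((  100 * blk_per_mb ))          # 100 MiB lvol -> 204800 blocks
sb=$(( before - 122880 ))               # observed overhead: 8192 blocks
echo "raid before resize: $(( before - sb ))"   # matches 122880 in the log
echo "raid after resize:  $(( after  - sb ))"   # matches 196608 in the log
```

This also explains the `(2) smaller than existing raid bdev Raid (3)` superblock sequence-number message: the examine path re-claims both lvols against the newer on-disk superblock after the passthru bdev is re-created.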
************************************ 00:06:59.969 10:52:06 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:59.969 10:52:06 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:59.969 10:52:06 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:59.969 10:52:06 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:59.969 10:52:06 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:59.969 10:52:06 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:59.969 10:52:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:59.970 10:52:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.970 10:52:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:59.970 ************************************ 00:06:59.970 START TEST raid_function_test_raid0 00:06:59.970 ************************************ 00:06:59.970 10:52:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:06:59.970 10:52:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:59.970 10:52:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:59.970 10:52:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:59.970 10:52:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60460 00:06:59.970 10:52:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:59.970 10:52:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60460' 00:06:59.970 Process raid pid: 60460 00:06:59.970 10:52:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60460 00:06:59.970 10:52:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 
60460 ']' 00:06:59.970 10:52:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.970 10:52:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:59.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.970 10:52:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.970 10:52:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:59.970 10:52:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:59.970 [2024-11-15 10:52:06.742267] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:06:59.970 [2024-11-15 10:52:06.742418] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.230 [2024-11-15 10:52:06.900087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.230 [2024-11-15 10:52:07.025252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.490 [2024-11-15 10:52:07.240999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.490 [2024-11-15 10:52:07.241049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.749 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:00.749 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:07:00.749 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:00.749 10:52:07 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.749 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:00.749 Base_1 00:07:00.749 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.749 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:00.749 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.749 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:01.011 Base_2 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:01.011 [2024-11-15 10:52:07.694807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:01.011 [2024-11-15 10:52:07.696861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:01.011 [2024-11-15 10:52:07.696948] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:01.011 [2024-11-15 10:52:07.696960] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:01.011 [2024-11-15 10:52:07.697253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:01.011 [2024-11-15 10:52:07.697426] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:01.011 [2024-11-15 10:52:07.697441] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:01.011 [2024-11-15 10:52:07.697613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:07:01.011 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:01.270 [2024-11-15 10:52:07.942472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:01.270 /dev/nbd0 00:07:01.270 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:01.270 10:52:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:01.270 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:01.270 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:07:01.270 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:01.270 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:01.270 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:01.270 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:07:01.270 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:01.270 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:01.270 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.270 1+0 records in 00:07:01.270 1+0 records out 00:07:01.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386678 s, 10.6 MB/s 00:07:01.270 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.270 10:52:07 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@888 -- # size=4096 00:07:01.270 10:52:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.270 10:52:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:01.270 10:52:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:07:01.270 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.270 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:01.270 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:01.270 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:01.270 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:01.530 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:01.530 { 00:07:01.530 "nbd_device": "/dev/nbd0", 00:07:01.530 "bdev_name": "raid" 00:07:01.530 } 00:07:01.530 ]' 00:07:01.530 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:01.530 { 00:07:01.530 "nbd_device": "/dev/nbd0", 00:07:01.530 "bdev_name": "raid" 00:07:01.530 } 00:07:01.530 ]' 00:07:01.530 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.530 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:01.530 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:01.530 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.530 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:01.531 
10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 
00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:01.531 4096+0 records in 00:07:01.531 4096+0 records out 00:07:01.531 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0241107 s, 87.0 MB/s 00:07:01.531 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:01.791 4096+0 records in 00:07:01.791 4096+0 records out 00:07:01.791 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.217998 s, 9.6 MB/s 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:01.791 128+0 records in 00:07:01.791 128+0 records out 00:07:01.791 65536 bytes (66 kB, 64 KiB) copied, 0.000466308 s, 141 MB/s 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( 
i++ )) 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:01.791 2035+0 records in 00:07:01.791 2035+0 records out 00:07:01.791 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0199596 s, 52.2 MB/s 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:01.791 456+0 records in 00:07:01.791 456+0 records out 00:07:01.791 233472 bytes (233 kB, 228 KiB) copied, 0.00122836 s, 190 MB/s 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.791 10:52:08 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.791 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:02.050 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.050 [2024-11-15 10:52:08.904596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.050 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.050 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.051 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.051 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.051 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:07:02.051 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:02.051 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.051 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:02.051 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:02.051 10:52:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60460 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 60460 ']' 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@956 -- # kill -0 60460 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:02.310 10:52:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60460 00:07:02.569 killing process with pid 60460 00:07:02.569 10:52:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:02.569 10:52:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:02.569 10:52:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60460' 00:07:02.569 10:52:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 60460 00:07:02.569 [2024-11-15 10:52:09.247905] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:02.569 [2024-11-15 10:52:09.248019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:02.569 [2024-11-15 10:52:09.248071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:02.569 [2024-11-15 10:52:09.248088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:02.569 10:52:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 60460 00:07:02.569 [2024-11-15 10:52:09.471043] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:03.951 10:52:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:03.951 00:07:03.951 real 0m3.947s 00:07:03.951 user 0m4.653s 00:07:03.951 sys 0m0.935s 00:07:03.951 10:52:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.951 10:52:10 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:07:03.951 ************************************ 00:07:03.951 END TEST raid_function_test_raid0 00:07:03.951 ************************************ 00:07:03.951 10:52:10 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:03.951 10:52:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:03.951 10:52:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:03.951 10:52:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:03.951 ************************************ 00:07:03.951 START TEST raid_function_test_concat 00:07:03.951 ************************************ 00:07:03.951 10:52:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:07:03.951 10:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:03.951 10:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:03.951 10:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:03.951 10:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60589 00:07:03.951 10:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:03.951 Process raid pid: 60589 00:07:03.951 10:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60589' 00:07:03.951 10:52:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60589 00:07:03.951 10:52:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 60589 ']' 00:07:03.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
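The concat test that follows is driven entirely over SPDK's JSON-RPC socket: two malloc base bdevs are created, combined into a concat raid bdev, and exposed through nbd. The call sequence can be sketched as below, where `$RPC` is a stand-in that merely prints each call (the real trace invokes `scripts/rpc.py -s /var/tmp/spdk.sock` against the running `bdev_svc` app):

```shell
#!/usr/bin/env bash
# Stand-in that prints the RPC calls instead of issuing them; the real test
# pipes these through scripts/rpc.py to a live SPDK application.
RPC="echo rpc.py -s /var/tmp/spdk.sock"

$RPC bdev_malloc_create 32 512 -b Base_1        # 32 MiB bdev, 512 B blocks
$RPC bdev_malloc_create 32 512 -b Base_2
$RPC bdev_raid_create -z 64 -r concat -b 'Base_1 Base_2' -n raid
$RPC nbd_start_disk raid /dev/nbd0              # expose the raid bdev as nbd
```

After `nbd_start_disk`, the test polls `/proc/partitions` until the kernel device appears, then runs the discard verification against it.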
00:07:03.951 10:52:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.951 10:52:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:03.951 10:52:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.951 10:52:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:03.951 10:52:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:03.951 [2024-11-15 10:52:10.765971] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:07:03.951 [2024-11-15 10:52:10.766181] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.223 [2024-11-15 10:52:10.940937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.223 [2024-11-15 10:52:11.055500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.483 [2024-11-15 10:52:11.275122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.483 [2024-11-15 10:52:11.275265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.742 10:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:04.742 10:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:07:04.742 10:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:04.742 10:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.742 10:52:11 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:05.002 Base_1 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:05.002 Base_2 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:05.002 [2024-11-15 10:52:11.719923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:05.002 [2024-11-15 10:52:11.721949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:05.002 [2024-11-15 10:52:11.722049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:05.002 [2024-11-15 10:52:11.722076] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:05.002 [2024-11-15 10:52:11.722357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:05.002 [2024-11-15 10:52:11.722508] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:05.002 [2024-11-15 10:52:11.722518] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:05.002 [2024-11-15 
10:52:11.722711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:05.002 10:52:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:05.002 10:52:11 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:05.262 [2024-11-15 10:52:11.975545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:05.262 /dev/nbd0 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.262 1+0 records in 00:07:05.262 1+0 records out 00:07:05.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354077 s, 11.6 MB/s 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 
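The `waitfornbd` helper visible in the trace is a bounded poll: grep `/proc/partitions` for the device name up to 20 times and break on the first hit. A generic, runnable sketch of that pattern follows; `./partitions.txt` is an illustrative stand-in for `/proc/partitions`, and the function name is ours, not SPDK's.

```shell
#!/usr/bin/env bash
# Bounded poll in the style of waitfornbd/waitfornbd_exit in the trace:
# retry up to 20 times, succeed as soon as the pattern appears as a word.
wait_for_line() {
    local pattern=$1 file=$2 i
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$pattern" "$file" && return 0
        sleep 0.1
    done
    return 1
}

printf 'nbd0 present\n' > ./partitions.txt   # stand-in for /proc/partitions
wait_for_line nbd0 ./partitions.txt && echo ready
```

The inverse helper (`waitfornbd_exit`) uses the same loop but breaks when the grep stops matching, i.e. when the device has been torn down.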
00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:05.262 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.522 { 00:07:05.522 "nbd_device": "/dev/nbd0", 00:07:05.522 "bdev_name": "raid" 00:07:05.522 } 00:07:05.522 ]' 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.522 { 00:07:05.522 "nbd_device": "/dev/nbd0", 00:07:05.522 "bdev_name": "raid" 00:07:05.522 } 00:07:05.522 ]' 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:05.522 10:52:12 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local 
unmap_len 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:05.522 4096+0 records in 00:07:05.522 4096+0 records out 00:07:05.522 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0355495 s, 59.0 MB/s 00:07:05.522 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:05.782 4096+0 records in 00:07:05.782 4096+0 records out 00:07:05.782 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.212051 s, 9.9 MB/s 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:05.782 128+0 records in 00:07:05.782 128+0 records out 00:07:05.782 65536 bytes (66 kB, 64 KiB) copied, 0.00179553 s, 36.5 MB/s 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:05.782 2035+0 records in 00:07:05.782 2035+0 records out 00:07:05.782 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0154134 s, 67.6 MB/s 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:05.782 456+0 records in 00:07:05.782 456+0 records out 00:07:05.782 233472 bytes (233 kB, 228 KiB) copied, 0.00105861 s, 221 MB/s 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 
00:07:05.782 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:06.041 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:06.042 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:06.042 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:06.042 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:06.042 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:06.042 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:06.042 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.042 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:06.042 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.042 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:06.300 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.300 [2024-11-15 10:52:12.976400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.300 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.300 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.300 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.300 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.300 10:52:12 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.300 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:06.300 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.300 10:52:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:06.300 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:06.300 10:52:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:06.300 10:52:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.300 10:52:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.300 10:52:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.558 10:52:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.558 10:52:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.558 10:52:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.558 10:52:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:06.558 10:52:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.558 10:52:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.559 10:52:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:06.559 10:52:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:06.559 10:52:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60589 00:07:06.559 10:52:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 60589 ']' 
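The `killprocess` helper traced here is careful teardown: confirm the pid is still alive, check via `ps` that it still names the expected process (the trace compares against `reactor_0`), then signal it and wait for it to exit. A minimal sketch of that guard, using a background `sleep` as a stand-in for the `bdev_svc` app under test:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern: verify before killing, then reap.
sleep 30 &            # stand-in for the application under test
pid=$!

if kill -0 "$pid" 2>/dev/null; then
    name=$(ps --no-headers -o comm= -p "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
fi
wait "$pid" 2>/dev/null || true
echo stopped
```

Checking the command name before `kill` guards against pid reuse: if the original process already died and the pid now belongs to something else, the helper refuses to signal it.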
00:07:06.559 10:52:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 60589 00:07:06.559 10:52:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 00:07:06.559 10:52:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:06.559 10:52:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60589 00:07:06.559 killing process with pid 60589 00:07:06.559 10:52:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:06.559 10:52:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:06.559 10:52:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60589' 00:07:06.559 10:52:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 60589 00:07:06.559 [2024-11-15 10:52:13.309754] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.559 [2024-11-15 10:52:13.309862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.559 [2024-11-15 10:52:13.309917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:06.559 10:52:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 60589 00:07:06.559 [2024-11-15 10:52:13.309929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:06.817 [2024-11-15 10:52:13.527716] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.197 10:52:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:08.197 00:07:08.197 real 0m4.006s 00:07:08.197 user 0m4.708s 00:07:08.197 sys 0m0.984s 00:07:08.197 ************************************ 00:07:08.197 END TEST raid_function_test_concat 
00:07:08.197 ************************************ 00:07:08.197 10:52:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.197 10:52:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:08.197 10:52:14 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:08.197 10:52:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:08.197 10:52:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:08.197 10:52:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.197 ************************************ 00:07:08.197 START TEST raid0_resize_test 00:07:08.197 ************************************ 00:07:08.197 10:52:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60718 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # 
echo 'Process raid pid: 60718' 00:07:08.198 Process raid pid: 60718 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60718 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60718 ']' 00:07:08.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:08.198 10:52:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.198 [2024-11-15 10:52:14.843272] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:07:08.198 [2024-11-15 10:52:14.843406] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.198 [2024-11-15 10:52:15.022327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.457 [2024-11-15 10:52:15.140684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.457 [2024-11-15 10:52:15.347923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.457 [2024-11-15 10:52:15.347973] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.026 Base_1 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.026 Base_2 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.026 [2024-11-15 10:52:15.749535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:09.026 [2024-11-15 10:52:15.751377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:09.026 [2024-11-15 10:52:15.751507] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:09.026 [2024-11-15 10:52:15.751526] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:09.026 [2024-11-15 10:52:15.751792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:09.026 [2024-11-15 10:52:15.751944] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:09.026 [2024-11-15 10:52:15.751955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:09.026 [2024-11-15 10:52:15.752118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.026 [2024-11-15 10:52:15.761485] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:09.026 [2024-11-15 10:52:15.761513] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:09.026 true 
00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.026 [2024-11-15 10:52:15.777686] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.026 [2024-11-15 10:52:15.825463] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:09.026 [2024-11-15 10:52:15.825554] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:09.026 [2024-11-15 10:52:15.825630] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:09.026 true 
00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.026 [2024-11-15 10:52:15.841642] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60718 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60718 ']' 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 60718 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60718 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:09.026 10:52:15 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60718' 00:07:09.026 killing process with pid 60718 00:07:09.026 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 60718 00:07:09.026 [2024-11-15 10:52:15.922135] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:09.027 [2024-11-15 10:52:15.922317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.027 10:52:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 60718 00:07:09.027 [2024-11-15 10:52:15.922408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:09.027 [2024-11-15 10:52:15.922451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:09.027 [2024-11-15 10:52:15.940589] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:10.408 10:52:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:10.408 00:07:10.408 real 0m2.324s 00:07:10.408 user 0m2.504s 00:07:10.408 sys 0m0.342s 00:07:10.408 10:52:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:10.408 10:52:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.408 ************************************ 00:07:10.408 END TEST raid0_resize_test 00:07:10.408 ************************************ 00:07:10.408 10:52:17 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:10.408 10:52:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:10.408 10:52:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:10.408 10:52:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:10.408 
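The block counts reported in the raid0_resize_test run above follow simple arithmetic: each 32 MiB null bdev with 512-byte blocks holds 65536 blocks, the raid0 "Raid" bdev starts at 131072 blocks, and it only grows to 262144 after *both* base bdevs are resized to 64 MiB, since a raid0 set is limited by its smallest member. The raid1 run later in the log shows the mirrored counterpart: capacity is the minimum of the members (65536, then 131072 once both bases grow). A minimal sketch of these two sizing rules, with strip-size rounding ignored (an assumption; the real calculation in bdev_raid.c aligns block counts to the strip size):

```python
def raid0_num_blocks(base_block_counts):
    # raid0 stripes across all members, so capacity is the
    # smallest member replicated across every member
    # (strip-size rounding ignored in this sketch).
    return len(base_block_counts) * min(base_block_counts)

def raid1_num_blocks(base_block_counts):
    # raid1 mirrors, so capacity is just the smallest member.
    return min(base_block_counts)

MIB_32 = 32 * 1024 * 1024 // 512   # 65536 blocks of 512 bytes
MIB_64 = 64 * 1024 * 1024 // 512   # 131072 blocks

print(raid0_num_blocks([MIB_32, MIB_32]))  # 131072, as in the first bdev_get_bdevs
print(raid0_num_blocks([MIB_64, MIB_32]))  # 131072, unchanged after only Base_1 resized
print(raid0_num_blocks([MIB_64, MIB_64]))  # 262144, after both bases resized
print(raid1_num_blocks([MIB_32, MIB_32]))  # 65536
print(raid1_num_blocks([MIB_64, MIB_64]))  # 131072
```

These values match the `raid_bdev_resize_base_bdev` notices above ("block count was changed from 131072 to 262144" for raid0, "from 65536 to 131072" for raid1).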
************************************ 00:07:10.408 START TEST raid1_resize_test 00:07:10.408 ************************************ 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60774 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60774' 00:07:10.408 Process raid pid: 60774 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60774 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60774 ']' 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:10.408 10:52:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.408 [2024-11-15 10:52:17.249079] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:07:10.409 [2024-11-15 10:52:17.249288] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.673 [2024-11-15 10:52:17.428755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.673 [2024-11-15 10:52:17.550891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.938 [2024-11-15 10:52:17.763275] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.938 [2024-11-15 10:52:17.763428] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.198 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:11.198 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:07:11.198 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:11.198 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.198 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.198 Base_1 00:07:11.198 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.198 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:11.198 10:52:18 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.198 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.198 Base_2 00:07:11.198 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.198 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:11.198 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:11.198 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.198 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.459 [2024-11-15 10:52:18.121405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:11.459 [2024-11-15 10:52:18.123212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:11.459 [2024-11-15 10:52:18.123280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:11.459 [2024-11-15 10:52:18.123290] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:11.459 [2024-11-15 10:52:18.123579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:11.459 [2024-11-15 10:52:18.123713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:11.459 [2024-11-15 10:52:18.123722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:11.459 [2024-11-15 10:52:18.123924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:11.459 10:52:18 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.459 [2024-11-15 10:52:18.133328] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:11.459 [2024-11-15 10:52:18.133399] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:11.459 true 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.459 [2024-11-15 10:52:18.149465] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:11.459 [2024-11-15 10:52:18.193251] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:11.459 [2024-11-15 10:52:18.193363] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:11.459 [2024-11-15 10:52:18.193432] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:11.459 true 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.459 [2024-11-15 10:52:18.209439] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60774 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60774 ']' 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 60774 00:07:11.459 
10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60774 00:07:11.459 killing process with pid 60774 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60774' 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 60774 00:07:11.459 [2024-11-15 10:52:18.294062] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.459 [2024-11-15 10:52:18.294160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.459 10:52:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 60774 00:07:11.459 [2024-11-15 10:52:18.294715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.459 [2024-11-15 10:52:18.294743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:11.460 [2024-11-15 10:52:18.313822] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:12.838 10:52:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:12.838 00:07:12.838 real 0m2.328s 00:07:12.838 user 0m2.486s 00:07:12.838 sys 0m0.351s 00:07:12.838 10:52:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:12.838 10:52:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.838 ************************************ 00:07:12.838 END TEST raid1_resize_test 
00:07:12.838 ************************************ 00:07:12.838 10:52:19 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:12.838 10:52:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:12.838 10:52:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:12.838 10:52:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:12.838 10:52:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:12.838 10:52:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.838 ************************************ 00:07:12.838 START TEST raid_state_function_test 00:07:12.838 ************************************ 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:12.838 10:52:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:12.838 Process raid pid: 60837 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60837 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60837' 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60837 00:07:12.838 10:52:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60837 ']' 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:12.838 10:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.838 [2024-11-15 10:52:19.626030] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:07:12.838 [2024-11-15 10:52:19.626162] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.097 [2024-11-15 10:52:19.781359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.097 [2024-11-15 10:52:19.902696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.356 [2024-11-15 10:52:20.127200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.356 [2024-11-15 10:52:20.127248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.617 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:13.617 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.618 [2024-11-15 10:52:20.497258] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:13.618 [2024-11-15 10:52:20.497330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:13.618 [2024-11-15 10:52:20.497342] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.618 [2024-11-15 10:52:20.497354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.618 
10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.618 "name": "Existed_Raid", 00:07:13.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.618 "strip_size_kb": 64, 00:07:13.618 "state": "configuring", 00:07:13.618 "raid_level": "raid0", 00:07:13.618 "superblock": false, 00:07:13.618 "num_base_bdevs": 2, 00:07:13.618 "num_base_bdevs_discovered": 0, 00:07:13.618 "num_base_bdevs_operational": 2, 00:07:13.618 "base_bdevs_list": [ 00:07:13.618 { 00:07:13.618 "name": "BaseBdev1", 00:07:13.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.618 "is_configured": false, 00:07:13.618 "data_offset": 0, 00:07:13.618 "data_size": 0 00:07:13.618 }, 00:07:13.618 { 00:07:13.618 "name": "BaseBdev2", 00:07:13.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.618 "is_configured": false, 00:07:13.618 "data_offset": 0, 00:07:13.618 "data_size": 0 00:07:13.618 } 00:07:13.618 ] 00:07:13.618 }' 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.618 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:14.189 10:52:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.189 [2024-11-15 10:52:20.920465] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:14.189 [2024-11-15 10:52:20.920561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.189 [2024-11-15 10:52:20.928444] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:14.189 [2024-11-15 10:52:20.928555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:14.189 [2024-11-15 10:52:20.928583] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.189 [2024-11-15 10:52:20.928609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.189 [2024-11-15 10:52:20.974007] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:14.189 BaseBdev1 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.189 10:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.189 [ 00:07:14.189 { 00:07:14.189 "name": "BaseBdev1", 00:07:14.189 "aliases": [ 00:07:14.189 "c255e82b-29f4-4e49-99d5-9383d12b0265" 00:07:14.189 ], 00:07:14.189 "product_name": "Malloc disk", 00:07:14.189 "block_size": 512, 00:07:14.189 "num_blocks": 65536, 00:07:14.189 "uuid": 
"c255e82b-29f4-4e49-99d5-9383d12b0265", 00:07:14.189 "assigned_rate_limits": { 00:07:14.189 "rw_ios_per_sec": 0, 00:07:14.189 "rw_mbytes_per_sec": 0, 00:07:14.189 "r_mbytes_per_sec": 0, 00:07:14.189 "w_mbytes_per_sec": 0 00:07:14.189 }, 00:07:14.189 "claimed": true, 00:07:14.189 "claim_type": "exclusive_write", 00:07:14.189 "zoned": false, 00:07:14.189 "supported_io_types": { 00:07:14.189 "read": true, 00:07:14.189 "write": true, 00:07:14.189 "unmap": true, 00:07:14.189 "flush": true, 00:07:14.189 "reset": true, 00:07:14.189 "nvme_admin": false, 00:07:14.189 "nvme_io": false, 00:07:14.189 "nvme_io_md": false, 00:07:14.189 "write_zeroes": true, 00:07:14.189 "zcopy": true, 00:07:14.189 "get_zone_info": false, 00:07:14.189 "zone_management": false, 00:07:14.189 "zone_append": false, 00:07:14.189 "compare": false, 00:07:14.189 "compare_and_write": false, 00:07:14.189 "abort": true, 00:07:14.189 "seek_hole": false, 00:07:14.189 "seek_data": false, 00:07:14.189 "copy": true, 00:07:14.189 "nvme_iov_md": false 00:07:14.189 }, 00:07:14.189 "memory_domains": [ 00:07:14.189 { 00:07:14.189 "dma_device_id": "system", 00:07:14.189 "dma_device_type": 1 00:07:14.189 }, 00:07:14.189 { 00:07:14.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.189 "dma_device_type": 2 00:07:14.189 } 00:07:14.189 ], 00:07:14.189 "driver_specific": {} 00:07:14.189 } 00:07:14.189 ] 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.189 10:52:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.189 "name": "Existed_Raid", 00:07:14.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.189 "strip_size_kb": 64, 00:07:14.189 "state": "configuring", 00:07:14.189 "raid_level": "raid0", 00:07:14.189 "superblock": false, 00:07:14.189 "num_base_bdevs": 2, 00:07:14.189 "num_base_bdevs_discovered": 1, 00:07:14.189 "num_base_bdevs_operational": 2, 00:07:14.189 "base_bdevs_list": [ 00:07:14.189 { 00:07:14.189 "name": "BaseBdev1", 00:07:14.189 "uuid": "c255e82b-29f4-4e49-99d5-9383d12b0265", 00:07:14.189 "is_configured": true, 00:07:14.189 "data_offset": 0, 
00:07:14.189 "data_size": 65536 00:07:14.189 }, 00:07:14.189 { 00:07:14.189 "name": "BaseBdev2", 00:07:14.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.189 "is_configured": false, 00:07:14.189 "data_offset": 0, 00:07:14.189 "data_size": 0 00:07:14.189 } 00:07:14.189 ] 00:07:14.189 }' 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.189 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.759 [2024-11-15 10:52:21.453227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:14.759 [2024-11-15 10:52:21.453290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.759 [2024-11-15 10:52:21.461248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:14.759 [2024-11-15 10:52:21.463089] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.759 [2024-11-15 10:52:21.463179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
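The `verify_raid_bdev_state` calls traced above pipe `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and then compare fields like `state` and `raid_level` against the expected values. The following is a jq-free sketch of that check against an abridged copy of the JSON captured in this log; the field names match the log output, but the `json_field` helper is illustrative only and is not part of the test scripts:

```shell
# Abridged from the raid_bdev_info dump above (flat JSON, string fields only).
raid_bdev_info='{"name": "Existed_Raid", "state": "configuring", "raid_level": "raid0"}'

# Extract the string value following a given key (crude, works for flat JSON
# like this; the real script uses jq).
json_field() {
    printf '%s\n' "$raid_bdev_info" | sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p"
}

state=$(json_field state)
raid_level=$(json_field raid_level)

# While the base bdevs do not exist yet, the raid bdev must stay "configuring".
[ "$state" = "configuring" ] && [ "$raid_level" = "raid0" ] && echo "Existed_Raid is configuring"
```

This mirrors why the test can create `Existed_Raid` before `BaseBdev1`/`BaseBdev2` exist: the RPC succeeds, but the array is held in the `configuring` state until every base bdev is discovered.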
00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.759 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.760 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.760 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.760 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.760 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.760 10:52:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.760 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.760 "name": "Existed_Raid", 00:07:14.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.760 "strip_size_kb": 64, 00:07:14.760 "state": "configuring", 00:07:14.760 "raid_level": "raid0", 00:07:14.760 "superblock": false, 00:07:14.760 "num_base_bdevs": 2, 00:07:14.760 "num_base_bdevs_discovered": 1, 00:07:14.760 "num_base_bdevs_operational": 2, 00:07:14.760 "base_bdevs_list": [ 00:07:14.760 { 00:07:14.760 "name": "BaseBdev1", 00:07:14.760 "uuid": "c255e82b-29f4-4e49-99d5-9383d12b0265", 00:07:14.760 "is_configured": true, 00:07:14.760 "data_offset": 0, 00:07:14.760 "data_size": 65536 00:07:14.760 }, 00:07:14.760 { 00:07:14.760 "name": "BaseBdev2", 00:07:14.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.760 "is_configured": false, 00:07:14.760 "data_offset": 0, 00:07:14.760 "data_size": 0 00:07:14.760 } 00:07:14.760 ] 00:07:14.760 }' 00:07:14.760 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.760 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.019 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:15.019 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.019 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.019 [2024-11-15 10:52:21.917423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:15.019 [2024-11-15 10:52:21.917533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:15.019 [2024-11-15 10:52:21.917548] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:15.019 [2024-11-15 10:52:21.917841] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:15.019 [2024-11-15 10:52:21.918021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:15.019 [2024-11-15 10:52:21.918037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:15.019 [2024-11-15 10:52:21.918333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.019 BaseBdev2 00:07:15.020 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.020 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:15.020 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:15.020 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:15.020 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:15.020 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:15.020 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:15.020 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:15.020 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.020 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.020 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.020 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:15.020 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.020 10:52:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.020 [ 00:07:15.020 { 00:07:15.020 "name": "BaseBdev2", 00:07:15.020 "aliases": [ 00:07:15.020 "f130a4ab-f9b8-4e85-9fbd-6ad5c1e52994" 00:07:15.020 ], 00:07:15.020 "product_name": "Malloc disk", 00:07:15.020 "block_size": 512, 00:07:15.020 "num_blocks": 65536, 00:07:15.020 "uuid": "f130a4ab-f9b8-4e85-9fbd-6ad5c1e52994", 00:07:15.020 "assigned_rate_limits": { 00:07:15.020 "rw_ios_per_sec": 0, 00:07:15.020 "rw_mbytes_per_sec": 0, 00:07:15.020 "r_mbytes_per_sec": 0, 00:07:15.020 "w_mbytes_per_sec": 0 00:07:15.020 }, 00:07:15.020 "claimed": true, 00:07:15.020 "claim_type": "exclusive_write", 00:07:15.020 "zoned": false, 00:07:15.020 "supported_io_types": { 00:07:15.020 "read": true, 00:07:15.020 "write": true, 00:07:15.020 "unmap": true, 00:07:15.020 "flush": true, 00:07:15.020 "reset": true, 00:07:15.020 "nvme_admin": false, 00:07:15.020 "nvme_io": false, 00:07:15.020 "nvme_io_md": false, 00:07:15.020 "write_zeroes": true, 00:07:15.020 "zcopy": true, 00:07:15.020 "get_zone_info": false, 00:07:15.280 "zone_management": false, 00:07:15.280 "zone_append": false, 00:07:15.280 "compare": false, 00:07:15.280 "compare_and_write": false, 00:07:15.280 "abort": true, 00:07:15.280 "seek_hole": false, 00:07:15.280 "seek_data": false, 00:07:15.280 "copy": true, 00:07:15.280 "nvme_iov_md": false 00:07:15.280 }, 00:07:15.280 "memory_domains": [ 00:07:15.280 { 00:07:15.280 "dma_device_id": "system", 00:07:15.280 "dma_device_type": 1 00:07:15.280 }, 00:07:15.280 { 00:07:15.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.280 "dma_device_type": 2 00:07:15.280 } 00:07:15.280 ], 00:07:15.280 "driver_specific": {} 00:07:15.280 } 00:07:15.280 ] 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:15.280 10:52:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.280 10:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.280 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:15.280 "name": "Existed_Raid", 00:07:15.280 "uuid": "e49cea38-8c4c-4b13-a821-fe285c843798", 00:07:15.280 "strip_size_kb": 64, 00:07:15.280 "state": "online", 00:07:15.280 "raid_level": "raid0", 00:07:15.280 "superblock": false, 00:07:15.280 "num_base_bdevs": 2, 00:07:15.280 "num_base_bdevs_discovered": 2, 00:07:15.280 "num_base_bdevs_operational": 2, 00:07:15.280 "base_bdevs_list": [ 00:07:15.280 { 00:07:15.280 "name": "BaseBdev1", 00:07:15.280 "uuid": "c255e82b-29f4-4e49-99d5-9383d12b0265", 00:07:15.280 "is_configured": true, 00:07:15.280 "data_offset": 0, 00:07:15.280 "data_size": 65536 00:07:15.280 }, 00:07:15.280 { 00:07:15.280 "name": "BaseBdev2", 00:07:15.280 "uuid": "f130a4ab-f9b8-4e85-9fbd-6ad5c1e52994", 00:07:15.280 "is_configured": true, 00:07:15.280 "data_offset": 0, 00:07:15.280 "data_size": 65536 00:07:15.280 } 00:07:15.280 ] 00:07:15.280 }' 00:07:15.280 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.280 10:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.540 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:15.540 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:15.540 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:15.540 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:15.540 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:15.540 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:15.540 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:15.540 10:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:15.540 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:15.540 10:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.540 [2024-11-15 10:52:22.440927] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.540 10:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.800 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:15.800 "name": "Existed_Raid", 00:07:15.800 "aliases": [ 00:07:15.800 "e49cea38-8c4c-4b13-a821-fe285c843798" 00:07:15.800 ], 00:07:15.800 "product_name": "Raid Volume", 00:07:15.800 "block_size": 512, 00:07:15.800 "num_blocks": 131072, 00:07:15.800 "uuid": "e49cea38-8c4c-4b13-a821-fe285c843798", 00:07:15.800 "assigned_rate_limits": { 00:07:15.800 "rw_ios_per_sec": 0, 00:07:15.800 "rw_mbytes_per_sec": 0, 00:07:15.800 "r_mbytes_per_sec": 0, 00:07:15.800 "w_mbytes_per_sec": 0 00:07:15.800 }, 00:07:15.800 "claimed": false, 00:07:15.800 "zoned": false, 00:07:15.800 "supported_io_types": { 00:07:15.800 "read": true, 00:07:15.800 "write": true, 00:07:15.800 "unmap": true, 00:07:15.800 "flush": true, 00:07:15.800 "reset": true, 00:07:15.800 "nvme_admin": false, 00:07:15.800 "nvme_io": false, 00:07:15.800 "nvme_io_md": false, 00:07:15.800 "write_zeroes": true, 00:07:15.800 "zcopy": false, 00:07:15.800 "get_zone_info": false, 00:07:15.800 "zone_management": false, 00:07:15.800 "zone_append": false, 00:07:15.800 "compare": false, 00:07:15.800 "compare_and_write": false, 00:07:15.800 "abort": false, 00:07:15.800 "seek_hole": false, 00:07:15.800 "seek_data": false, 00:07:15.800 "copy": false, 00:07:15.800 "nvme_iov_md": false 00:07:15.800 }, 00:07:15.800 "memory_domains": [ 00:07:15.800 { 00:07:15.800 "dma_device_id": "system", 00:07:15.800 "dma_device_type": 1 00:07:15.800 }, 00:07:15.800 { 00:07:15.800 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:15.800 "dma_device_type": 2 00:07:15.800 }, 00:07:15.800 { 00:07:15.800 "dma_device_id": "system", 00:07:15.800 "dma_device_type": 1 00:07:15.800 }, 00:07:15.800 { 00:07:15.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.800 "dma_device_type": 2 00:07:15.800 } 00:07:15.800 ], 00:07:15.800 "driver_specific": { 00:07:15.800 "raid": { 00:07:15.800 "uuid": "e49cea38-8c4c-4b13-a821-fe285c843798", 00:07:15.800 "strip_size_kb": 64, 00:07:15.800 "state": "online", 00:07:15.800 "raid_level": "raid0", 00:07:15.800 "superblock": false, 00:07:15.800 "num_base_bdevs": 2, 00:07:15.800 "num_base_bdevs_discovered": 2, 00:07:15.800 "num_base_bdevs_operational": 2, 00:07:15.800 "base_bdevs_list": [ 00:07:15.800 { 00:07:15.800 "name": "BaseBdev1", 00:07:15.800 "uuid": "c255e82b-29f4-4e49-99d5-9383d12b0265", 00:07:15.800 "is_configured": true, 00:07:15.800 "data_offset": 0, 00:07:15.800 "data_size": 65536 00:07:15.800 }, 00:07:15.800 { 00:07:15.800 "name": "BaseBdev2", 00:07:15.800 "uuid": "f130a4ab-f9b8-4e85-9fbd-6ad5c1e52994", 00:07:15.800 "is_configured": true, 00:07:15.800 "data_offset": 0, 00:07:15.800 "data_size": 65536 00:07:15.800 } 00:07:15.800 ] 00:07:15.801 } 00:07:15.801 } 00:07:15.801 }' 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:15.801 BaseBdev2' 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.801 10:52:22 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:15.801 [2024-11-15 10:52:22.684291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:15.801 [2024-11-15 10:52:22.684345] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:15.801 [2024-11-15 10:52:22.684401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.064 10:52:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.064 "name": "Existed_Raid", 00:07:16.064 "uuid": "e49cea38-8c4c-4b13-a821-fe285c843798", 00:07:16.064 "strip_size_kb": 64, 00:07:16.064 "state": "offline", 00:07:16.064 "raid_level": "raid0", 00:07:16.064 "superblock": false, 00:07:16.064 "num_base_bdevs": 2, 00:07:16.064 "num_base_bdevs_discovered": 1, 00:07:16.064 "num_base_bdevs_operational": 1, 00:07:16.064 "base_bdevs_list": [ 00:07:16.064 { 00:07:16.064 "name": null, 00:07:16.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.064 "is_configured": false, 00:07:16.064 "data_offset": 0, 00:07:16.064 "data_size": 65536 00:07:16.064 }, 00:07:16.064 { 00:07:16.064 "name": "BaseBdev2", 00:07:16.064 "uuid": "f130a4ab-f9b8-4e85-9fbd-6ad5c1e52994", 00:07:16.064 "is_configured": true, 00:07:16.064 "data_offset": 0, 00:07:16.064 "data_size": 65536 00:07:16.064 } 00:07:16.064 ] 00:07:16.064 }' 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.064 10:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.324 10:52:23 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:16.324 10:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:16.324 10:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.324 10:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:16.324 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.324 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.584 [2024-11-15 10:52:23.283470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:16.584 [2024-11-15 10:52:23.283534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:16.584 10:52:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60837 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60837 ']' 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 60837 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60837 00:07:16.584 killing process with pid 60837 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60837' 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60837 00:07:16.584 [2024-11-15 10:52:23.464415] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:07:16.584 10:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60837 00:07:16.584 [2024-11-15 10:52:23.482192] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:17.972 10:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:17.972 00:07:17.972 real 0m5.128s 00:07:17.972 user 0m7.397s 00:07:17.972 sys 0m0.817s 00:07:17.972 ************************************ 00:07:17.972 END TEST raid_state_function_test 00:07:17.972 ************************************ 00:07:17.972 10:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:17.972 10:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.972 10:52:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:17.973 10:52:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:17.973 10:52:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:17.973 10:52:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:17.973 ************************************ 00:07:17.973 START TEST raid_state_function_test_sb 00:07:17.973 ************************************ 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61090 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:17.973 Process raid pid: 61090 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61090' 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61090 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61090 ']' 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:17.973 10:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.973 [2024-11-15 10:52:24.819404] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:07:17.973 [2024-11-15 10:52:24.819634] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.234 [2024-11-15 10:52:24.996332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.234 [2024-11-15 10:52:25.112556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.493 [2024-11-15 10:52:25.324335] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.493 [2024-11-15 10:52:25.324382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.063 [2024-11-15 10:52:25.682287] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:19.063 [2024-11-15 10:52:25.682379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:19.063 [2024-11-15 10:52:25.682392] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.063 [2024-11-15 10:52:25.682403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.063 
10:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.063 "name": "Existed_Raid", 00:07:19.063 "uuid": "39c1723a-02fd-47b7-b80b-c0f8c5dfb587", 00:07:19.063 "strip_size_kb": 
64, 00:07:19.063 "state": "configuring", 00:07:19.063 "raid_level": "raid0", 00:07:19.063 "superblock": true, 00:07:19.063 "num_base_bdevs": 2, 00:07:19.063 "num_base_bdevs_discovered": 0, 00:07:19.063 "num_base_bdevs_operational": 2, 00:07:19.063 "base_bdevs_list": [ 00:07:19.063 { 00:07:19.063 "name": "BaseBdev1", 00:07:19.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.063 "is_configured": false, 00:07:19.063 "data_offset": 0, 00:07:19.063 "data_size": 0 00:07:19.063 }, 00:07:19.063 { 00:07:19.063 "name": "BaseBdev2", 00:07:19.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.063 "is_configured": false, 00:07:19.063 "data_offset": 0, 00:07:19.063 "data_size": 0 00:07:19.063 } 00:07:19.063 ] 00:07:19.063 }' 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.063 10:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.323 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:19.323 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.323 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.323 [2024-11-15 10:52:26.129498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:19.323 [2024-11-15 10:52:26.129557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:19.323 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.323 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.324 10:52:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.324 [2024-11-15 10:52:26.137442] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:19.324 [2024-11-15 10:52:26.137485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:19.324 [2024-11-15 10:52:26.137495] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.324 [2024-11-15 10:52:26.137508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.324 BaseBdev1 00:07:19.324 [2024-11-15 10:52:26.188408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.324 [ 00:07:19.324 { 00:07:19.324 "name": "BaseBdev1", 00:07:19.324 "aliases": [ 00:07:19.324 "e36e99ca-3193-471f-9768-6af50ccedc2b" 00:07:19.324 ], 00:07:19.324 "product_name": "Malloc disk", 00:07:19.324 "block_size": 512, 00:07:19.324 "num_blocks": 65536, 00:07:19.324 "uuid": "e36e99ca-3193-471f-9768-6af50ccedc2b", 00:07:19.324 "assigned_rate_limits": { 00:07:19.324 "rw_ios_per_sec": 0, 00:07:19.324 "rw_mbytes_per_sec": 0, 00:07:19.324 "r_mbytes_per_sec": 0, 00:07:19.324 "w_mbytes_per_sec": 0 00:07:19.324 }, 00:07:19.324 "claimed": true, 00:07:19.324 "claim_type": "exclusive_write", 00:07:19.324 "zoned": false, 00:07:19.324 "supported_io_types": { 00:07:19.324 "read": true, 00:07:19.324 "write": true, 00:07:19.324 "unmap": true, 00:07:19.324 "flush": true, 00:07:19.324 "reset": true, 00:07:19.324 "nvme_admin": false, 00:07:19.324 "nvme_io": false, 00:07:19.324 "nvme_io_md": false, 00:07:19.324 "write_zeroes": true, 00:07:19.324 "zcopy": true, 00:07:19.324 "get_zone_info": false, 00:07:19.324 "zone_management": false, 00:07:19.324 "zone_append": false, 00:07:19.324 "compare": false, 00:07:19.324 "compare_and_write": false, 00:07:19.324 
"abort": true, 00:07:19.324 "seek_hole": false, 00:07:19.324 "seek_data": false, 00:07:19.324 "copy": true, 00:07:19.324 "nvme_iov_md": false 00:07:19.324 }, 00:07:19.324 "memory_domains": [ 00:07:19.324 { 00:07:19.324 "dma_device_id": "system", 00:07:19.324 "dma_device_type": 1 00:07:19.324 }, 00:07:19.324 { 00:07:19.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.324 "dma_device_type": 2 00:07:19.324 } 00:07:19.324 ], 00:07:19.324 "driver_specific": {} 00:07:19.324 } 00:07:19.324 ] 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.324 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.583 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.583 "name": "Existed_Raid", 00:07:19.583 "uuid": "60f763af-a02d-4f77-805c-4d6eb68389e6", 00:07:19.583 "strip_size_kb": 64, 00:07:19.583 "state": "configuring", 00:07:19.583 "raid_level": "raid0", 00:07:19.584 "superblock": true, 00:07:19.584 "num_base_bdevs": 2, 00:07:19.584 "num_base_bdevs_discovered": 1, 00:07:19.584 "num_base_bdevs_operational": 2, 00:07:19.584 "base_bdevs_list": [ 00:07:19.584 { 00:07:19.584 "name": "BaseBdev1", 00:07:19.584 "uuid": "e36e99ca-3193-471f-9768-6af50ccedc2b", 00:07:19.584 "is_configured": true, 00:07:19.584 "data_offset": 2048, 00:07:19.584 "data_size": 63488 00:07:19.584 }, 00:07:19.584 { 00:07:19.584 "name": "BaseBdev2", 00:07:19.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.584 "is_configured": false, 00:07:19.584 "data_offset": 0, 00:07:19.584 "data_size": 0 00:07:19.584 } 00:07:19.584 ] 00:07:19.584 }' 00:07:19.584 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.584 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:19.844 [2024-11-15 10:52:26.635732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:19.844 [2024-11-15 10:52:26.635815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.844 [2024-11-15 10:52:26.643790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:19.844 [2024-11-15 10:52:26.646051] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.844 [2024-11-15 10:52:26.646097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.844 "name": "Existed_Raid", 00:07:19.844 "uuid": "500558e6-8a43-4c83-bca9-e50ee561d70c", 00:07:19.844 "strip_size_kb": 64, 00:07:19.844 "state": "configuring", 00:07:19.844 "raid_level": "raid0", 00:07:19.844 "superblock": true, 00:07:19.844 "num_base_bdevs": 2, 00:07:19.844 "num_base_bdevs_discovered": 1, 00:07:19.844 "num_base_bdevs_operational": 2, 00:07:19.844 "base_bdevs_list": [ 00:07:19.844 { 00:07:19.844 "name": "BaseBdev1", 00:07:19.844 "uuid": "e36e99ca-3193-471f-9768-6af50ccedc2b", 00:07:19.844 "is_configured": true, 00:07:19.844 "data_offset": 2048, 
00:07:19.844 "data_size": 63488 00:07:19.844 }, 00:07:19.844 { 00:07:19.844 "name": "BaseBdev2", 00:07:19.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.844 "is_configured": false, 00:07:19.844 "data_offset": 0, 00:07:19.844 "data_size": 0 00:07:19.844 } 00:07:19.844 ] 00:07:19.844 }' 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.844 10:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.413 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:20.413 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.413 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.413 [2024-11-15 10:52:27.177338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:20.413 [2024-11-15 10:52:27.177644] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:20.413 [2024-11-15 10:52:27.177664] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:20.413 [2024-11-15 10:52:27.177975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:20.413 BaseBdev2 00:07:20.413 [2024-11-15 10:52:27.178139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:20.413 [2024-11-15 10:52:27.178153] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:20.413 [2024-11-15 10:52:27.178324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.413 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.413 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:20.413 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:20.413 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:20.413 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:20.413 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:20.413 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.414 [ 00:07:20.414 { 00:07:20.414 "name": "BaseBdev2", 00:07:20.414 "aliases": [ 00:07:20.414 "0be9994e-3fd8-441a-a035-dd63bbec8fc6" 00:07:20.414 ], 00:07:20.414 "product_name": "Malloc disk", 00:07:20.414 "block_size": 512, 00:07:20.414 "num_blocks": 65536, 00:07:20.414 "uuid": "0be9994e-3fd8-441a-a035-dd63bbec8fc6", 00:07:20.414 "assigned_rate_limits": { 00:07:20.414 "rw_ios_per_sec": 0, 00:07:20.414 "rw_mbytes_per_sec": 0, 00:07:20.414 "r_mbytes_per_sec": 0, 00:07:20.414 "w_mbytes_per_sec": 0 00:07:20.414 }, 00:07:20.414 "claimed": true, 00:07:20.414 "claim_type": 
"exclusive_write", 00:07:20.414 "zoned": false, 00:07:20.414 "supported_io_types": { 00:07:20.414 "read": true, 00:07:20.414 "write": true, 00:07:20.414 "unmap": true, 00:07:20.414 "flush": true, 00:07:20.414 "reset": true, 00:07:20.414 "nvme_admin": false, 00:07:20.414 "nvme_io": false, 00:07:20.414 "nvme_io_md": false, 00:07:20.414 "write_zeroes": true, 00:07:20.414 "zcopy": true, 00:07:20.414 "get_zone_info": false, 00:07:20.414 "zone_management": false, 00:07:20.414 "zone_append": false, 00:07:20.414 "compare": false, 00:07:20.414 "compare_and_write": false, 00:07:20.414 "abort": true, 00:07:20.414 "seek_hole": false, 00:07:20.414 "seek_data": false, 00:07:20.414 "copy": true, 00:07:20.414 "nvme_iov_md": false 00:07:20.414 }, 00:07:20.414 "memory_domains": [ 00:07:20.414 { 00:07:20.414 "dma_device_id": "system", 00:07:20.414 "dma_device_type": 1 00:07:20.414 }, 00:07:20.414 { 00:07:20.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.414 "dma_device_type": 2 00:07:20.414 } 00:07:20.414 ], 00:07:20.414 "driver_specific": {} 00:07:20.414 } 00:07:20.414 ] 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.414 "name": "Existed_Raid", 00:07:20.414 "uuid": "500558e6-8a43-4c83-bca9-e50ee561d70c", 00:07:20.414 "strip_size_kb": 64, 00:07:20.414 "state": "online", 00:07:20.414 "raid_level": "raid0", 00:07:20.414 "superblock": true, 00:07:20.414 "num_base_bdevs": 2, 00:07:20.414 "num_base_bdevs_discovered": 2, 00:07:20.414 "num_base_bdevs_operational": 2, 00:07:20.414 "base_bdevs_list": [ 00:07:20.414 { 00:07:20.414 "name": "BaseBdev1", 00:07:20.414 "uuid": "e36e99ca-3193-471f-9768-6af50ccedc2b", 00:07:20.414 "is_configured": true, 00:07:20.414 "data_offset": 2048, 00:07:20.414 "data_size": 63488 
00:07:20.414 }, 00:07:20.414 { 00:07:20.414 "name": "BaseBdev2", 00:07:20.414 "uuid": "0be9994e-3fd8-441a-a035-dd63bbec8fc6", 00:07:20.414 "is_configured": true, 00:07:20.414 "data_offset": 2048, 00:07:20.414 "data_size": 63488 00:07:20.414 } 00:07:20.414 ] 00:07:20.414 }' 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.414 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.985 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:20.985 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:20.985 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:20.985 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:20.985 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:20.985 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:20.985 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:20.985 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:20.985 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.985 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.985 [2024-11-15 10:52:27.648928] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.985 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.985 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:20.985 "name": 
"Existed_Raid", 00:07:20.985 "aliases": [ 00:07:20.985 "500558e6-8a43-4c83-bca9-e50ee561d70c" 00:07:20.985 ], 00:07:20.985 "product_name": "Raid Volume", 00:07:20.985 "block_size": 512, 00:07:20.985 "num_blocks": 126976, 00:07:20.985 "uuid": "500558e6-8a43-4c83-bca9-e50ee561d70c", 00:07:20.985 "assigned_rate_limits": { 00:07:20.985 "rw_ios_per_sec": 0, 00:07:20.985 "rw_mbytes_per_sec": 0, 00:07:20.985 "r_mbytes_per_sec": 0, 00:07:20.985 "w_mbytes_per_sec": 0 00:07:20.985 }, 00:07:20.985 "claimed": false, 00:07:20.985 "zoned": false, 00:07:20.985 "supported_io_types": { 00:07:20.985 "read": true, 00:07:20.985 "write": true, 00:07:20.985 "unmap": true, 00:07:20.985 "flush": true, 00:07:20.985 "reset": true, 00:07:20.985 "nvme_admin": false, 00:07:20.985 "nvme_io": false, 00:07:20.985 "nvme_io_md": false, 00:07:20.985 "write_zeroes": true, 00:07:20.985 "zcopy": false, 00:07:20.985 "get_zone_info": false, 00:07:20.985 "zone_management": false, 00:07:20.985 "zone_append": false, 00:07:20.985 "compare": false, 00:07:20.985 "compare_and_write": false, 00:07:20.985 "abort": false, 00:07:20.985 "seek_hole": false, 00:07:20.985 "seek_data": false, 00:07:20.985 "copy": false, 00:07:20.985 "nvme_iov_md": false 00:07:20.985 }, 00:07:20.985 "memory_domains": [ 00:07:20.985 { 00:07:20.985 "dma_device_id": "system", 00:07:20.985 "dma_device_type": 1 00:07:20.985 }, 00:07:20.985 { 00:07:20.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.985 "dma_device_type": 2 00:07:20.985 }, 00:07:20.985 { 00:07:20.985 "dma_device_id": "system", 00:07:20.985 "dma_device_type": 1 00:07:20.985 }, 00:07:20.985 { 00:07:20.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.985 "dma_device_type": 2 00:07:20.985 } 00:07:20.985 ], 00:07:20.985 "driver_specific": { 00:07:20.985 "raid": { 00:07:20.985 "uuid": "500558e6-8a43-4c83-bca9-e50ee561d70c", 00:07:20.985 "strip_size_kb": 64, 00:07:20.985 "state": "online", 00:07:20.985 "raid_level": "raid0", 00:07:20.985 "superblock": true, 00:07:20.985 
"num_base_bdevs": 2, 00:07:20.985 "num_base_bdevs_discovered": 2, 00:07:20.985 "num_base_bdevs_operational": 2, 00:07:20.986 "base_bdevs_list": [ 00:07:20.986 { 00:07:20.986 "name": "BaseBdev1", 00:07:20.986 "uuid": "e36e99ca-3193-471f-9768-6af50ccedc2b", 00:07:20.986 "is_configured": true, 00:07:20.986 "data_offset": 2048, 00:07:20.986 "data_size": 63488 00:07:20.986 }, 00:07:20.986 { 00:07:20.986 "name": "BaseBdev2", 00:07:20.986 "uuid": "0be9994e-3fd8-441a-a035-dd63bbec8fc6", 00:07:20.986 "is_configured": true, 00:07:20.986 "data_offset": 2048, 00:07:20.986 "data_size": 63488 00:07:20.986 } 00:07:20.986 ] 00:07:20.986 } 00:07:20.986 } 00:07:20.986 }' 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:20.986 BaseBdev2' 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.986 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.986 [2024-11-15 10:52:27.880339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:20.986 [2024-11-15 10:52:27.880399] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:20.986 [2024-11-15 10:52:27.880464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.246 10:52:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.246 10:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.246 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.246 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.246 "name": "Existed_Raid", 00:07:21.246 "uuid": "500558e6-8a43-4c83-bca9-e50ee561d70c", 00:07:21.246 "strip_size_kb": 64, 00:07:21.246 "state": "offline", 00:07:21.246 "raid_level": "raid0", 00:07:21.246 "superblock": true, 00:07:21.246 "num_base_bdevs": 2, 00:07:21.246 "num_base_bdevs_discovered": 1, 00:07:21.246 "num_base_bdevs_operational": 1, 00:07:21.246 "base_bdevs_list": [ 00:07:21.246 { 00:07:21.246 "name": null, 00:07:21.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.246 "is_configured": false, 00:07:21.246 "data_offset": 0, 00:07:21.246 "data_size": 63488 00:07:21.246 }, 00:07:21.246 { 00:07:21.246 "name": "BaseBdev2", 00:07:21.246 "uuid": "0be9994e-3fd8-441a-a035-dd63bbec8fc6", 00:07:21.246 "is_configured": true, 00:07:21.246 "data_offset": 2048, 00:07:21.246 "data_size": 63488 00:07:21.246 } 00:07:21.246 ] 00:07:21.246 }' 00:07:21.246 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.246 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.506 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:21.506 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:21.506 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.506 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.506 10:52:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.506 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:21.506 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.780 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:21.780 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:21.780 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:21.780 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.780 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.780 [2024-11-15 10:52:28.452796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:21.780 [2024-11-15 10:52:28.452883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:21.780 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.780 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:21.780 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61090 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61090 ']' 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61090 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61090 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:21.781 killing process with pid 61090 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61090' 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61090 00:07:21.781 [2024-11-15 10:52:28.638938] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:21.781 10:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 61090 00:07:21.781 [2024-11-15 10:52:28.657693] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.183 10:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:07:23.183 00:07:23.183 real 0m5.201s 00:07:23.183 user 0m7.430s 00:07:23.183 sys 0m0.839s 00:07:23.183 10:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:23.183 10:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.183 ************************************ 00:07:23.183 END TEST raid_state_function_test_sb 00:07:23.183 ************************************ 00:07:23.183 10:52:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:23.183 10:52:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:23.183 10:52:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:23.183 10:52:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.183 ************************************ 00:07:23.183 START TEST raid_superblock_test 00:07:23.183 ************************************ 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61342 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61342 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61342 ']' 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:23.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:23.183 10:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.183 [2024-11-15 10:52:30.080789] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:07:23.184 [2024-11-15 10:52:30.080924] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61342 ] 00:07:23.443 [2024-11-15 10:52:30.238408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.710 [2024-11-15 10:52:30.380658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.710 [2024-11-15 10:52:30.624061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.710 [2024-11-15 10:52:30.624110] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.280 10:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:24.280 10:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:24.280 10:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:24.280 10:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:24.280 10:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:24.280 10:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:24.280 10:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:24.280 10:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:24.280 10:52:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:24.280 10:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:24.280 10:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:24.280 10:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.280 10:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.280 malloc1 00:07:24.280 10:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.280 10:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:24.280 10:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.280 10:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.280 [2024-11-15 10:52:30.997567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:24.280 [2024-11-15 10:52:30.997647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.280 [2024-11-15 10:52:30.997674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:24.280 [2024-11-15 10:52:30.997684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.280 [2024-11-15 10:52:31.000214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.280 [2024-11-15 10:52:31.000251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:24.280 pt1 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:24.280 10:52:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.280 malloc2 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.280 [2024-11-15 10:52:31.060288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:24.280 [2024-11-15 10:52:31.060362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.280 [2024-11-15 10:52:31.060389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:24.280 
[2024-11-15 10:52:31.060399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.280 [2024-11-15 10:52:31.062871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.280 [2024-11-15 10:52:31.062902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:24.280 pt2 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.280 [2024-11-15 10:52:31.072336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:24.280 [2024-11-15 10:52:31.074432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:24.280 [2024-11-15 10:52:31.074596] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:24.280 [2024-11-15 10:52:31.074609] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:24.280 [2024-11-15 10:52:31.074872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:24.280 [2024-11-15 10:52:31.075040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:24.280 [2024-11-15 10:52:31.075058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:24.280 [2024-11-15 10:52:31.075224] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.280 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.280 "name": "raid_bdev1", 00:07:24.280 "uuid": 
"47d42a0e-14cb-4e3d-9b34-814ea5e99198", 00:07:24.280 "strip_size_kb": 64, 00:07:24.280 "state": "online", 00:07:24.280 "raid_level": "raid0", 00:07:24.280 "superblock": true, 00:07:24.280 "num_base_bdevs": 2, 00:07:24.280 "num_base_bdevs_discovered": 2, 00:07:24.280 "num_base_bdevs_operational": 2, 00:07:24.280 "base_bdevs_list": [ 00:07:24.280 { 00:07:24.280 "name": "pt1", 00:07:24.280 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:24.280 "is_configured": true, 00:07:24.280 "data_offset": 2048, 00:07:24.280 "data_size": 63488 00:07:24.280 }, 00:07:24.280 { 00:07:24.280 "name": "pt2", 00:07:24.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:24.281 "is_configured": true, 00:07:24.281 "data_offset": 2048, 00:07:24.281 "data_size": 63488 00:07:24.281 } 00:07:24.281 ] 00:07:24.281 }' 00:07:24.281 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.281 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.850 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:24.850 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:24.850 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:24.850 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:24.850 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:24.850 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:24.850 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:24.850 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:24.850 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.850 10:52:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.850 [2024-11-15 10:52:31.551857] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.850 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.850 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:24.850 "name": "raid_bdev1", 00:07:24.850 "aliases": [ 00:07:24.850 "47d42a0e-14cb-4e3d-9b34-814ea5e99198" 00:07:24.850 ], 00:07:24.850 "product_name": "Raid Volume", 00:07:24.850 "block_size": 512, 00:07:24.850 "num_blocks": 126976, 00:07:24.850 "uuid": "47d42a0e-14cb-4e3d-9b34-814ea5e99198", 00:07:24.850 "assigned_rate_limits": { 00:07:24.850 "rw_ios_per_sec": 0, 00:07:24.850 "rw_mbytes_per_sec": 0, 00:07:24.850 "r_mbytes_per_sec": 0, 00:07:24.850 "w_mbytes_per_sec": 0 00:07:24.850 }, 00:07:24.850 "claimed": false, 00:07:24.850 "zoned": false, 00:07:24.850 "supported_io_types": { 00:07:24.850 "read": true, 00:07:24.850 "write": true, 00:07:24.850 "unmap": true, 00:07:24.850 "flush": true, 00:07:24.850 "reset": true, 00:07:24.850 "nvme_admin": false, 00:07:24.850 "nvme_io": false, 00:07:24.850 "nvme_io_md": false, 00:07:24.850 "write_zeroes": true, 00:07:24.850 "zcopy": false, 00:07:24.850 "get_zone_info": false, 00:07:24.850 "zone_management": false, 00:07:24.850 "zone_append": false, 00:07:24.850 "compare": false, 00:07:24.850 "compare_and_write": false, 00:07:24.850 "abort": false, 00:07:24.850 "seek_hole": false, 00:07:24.850 "seek_data": false, 00:07:24.850 "copy": false, 00:07:24.850 "nvme_iov_md": false 00:07:24.850 }, 00:07:24.850 "memory_domains": [ 00:07:24.850 { 00:07:24.850 "dma_device_id": "system", 00:07:24.850 "dma_device_type": 1 00:07:24.850 }, 00:07:24.850 { 00:07:24.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.850 "dma_device_type": 2 00:07:24.850 }, 00:07:24.850 { 00:07:24.850 "dma_device_id": "system", 00:07:24.850 "dma_device_type": 
1 00:07:24.850 }, 00:07:24.850 { 00:07:24.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.850 "dma_device_type": 2 00:07:24.850 } 00:07:24.850 ], 00:07:24.850 "driver_specific": { 00:07:24.850 "raid": { 00:07:24.850 "uuid": "47d42a0e-14cb-4e3d-9b34-814ea5e99198", 00:07:24.850 "strip_size_kb": 64, 00:07:24.850 "state": "online", 00:07:24.850 "raid_level": "raid0", 00:07:24.850 "superblock": true, 00:07:24.850 "num_base_bdevs": 2, 00:07:24.850 "num_base_bdevs_discovered": 2, 00:07:24.850 "num_base_bdevs_operational": 2, 00:07:24.850 "base_bdevs_list": [ 00:07:24.850 { 00:07:24.850 "name": "pt1", 00:07:24.850 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:24.850 "is_configured": true, 00:07:24.850 "data_offset": 2048, 00:07:24.850 "data_size": 63488 00:07:24.850 }, 00:07:24.850 { 00:07:24.850 "name": "pt2", 00:07:24.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:24.851 "is_configured": true, 00:07:24.851 "data_offset": 2048, 00:07:24.851 "data_size": 63488 00:07:24.851 } 00:07:24.851 ] 00:07:24.851 } 00:07:24.851 } 00:07:24.851 }' 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:24.851 pt2' 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.851 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.111 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.111 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.111 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:25.111 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:25.111 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.111 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.111 [2024-11-15 10:52:31.799435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.111 10:52:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.111 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=47d42a0e-14cb-4e3d-9b34-814ea5e99198 00:07:25.111 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 47d42a0e-14cb-4e3d-9b34-814ea5e99198 ']' 00:07:25.111 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.112 [2024-11-15 10:52:31.843010] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:25.112 [2024-11-15 10:52:31.843054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:25.112 [2024-11-15 10:52:31.843167] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.112 [2024-11-15 10:52:31.843224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.112 [2024-11-15 10:52:31.843238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.112 10:52:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.112 [2024-11-15 10:52:31.974850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:25.112 [2024-11-15 10:52:31.977107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:25.112 [2024-11-15 10:52:31.977194] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:25.112 [2024-11-15 10:52:31.977253] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:25.112 [2024-11-15 10:52:31.977269] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:25.112 [2024-11-15 10:52:31.977282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:25.112 request: 00:07:25.112 { 00:07:25.112 "name": "raid_bdev1", 00:07:25.112 "raid_level": "raid0", 00:07:25.112 "base_bdevs": [ 00:07:25.112 "malloc1", 00:07:25.112 "malloc2" 00:07:25.112 ], 00:07:25.112 "strip_size_kb": 64, 00:07:25.112 "superblock": false, 00:07:25.112 "method": "bdev_raid_create", 00:07:25.112 "req_id": 1 00:07:25.112 } 00:07:25.112 Got JSON-RPC error response 00:07:25.112 response: 00:07:25.112 { 00:07:25.112 "code": -17, 00:07:25.112 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:25.112 } 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.112 10:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.112 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:25.112 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:25.112 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:25.112 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.112 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.112 [2024-11-15 10:52:32.034738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:25.112 [2024-11-15 10:52:32.034823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.112 [2024-11-15 10:52:32.034849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:25.112 [2024-11-15 10:52:32.034861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.372 [2024-11-15 10:52:32.037661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.372 [2024-11-15 10:52:32.037703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:25.372 [2024-11-15 10:52:32.037811] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:25.372 [2024-11-15 10:52:32.037887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:25.372 pt1 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.372 "name": "raid_bdev1", 00:07:25.372 "uuid": "47d42a0e-14cb-4e3d-9b34-814ea5e99198", 00:07:25.372 "strip_size_kb": 64, 00:07:25.372 "state": "configuring", 00:07:25.372 "raid_level": "raid0", 00:07:25.372 "superblock": true, 00:07:25.372 "num_base_bdevs": 2, 00:07:25.372 "num_base_bdevs_discovered": 1, 00:07:25.372 "num_base_bdevs_operational": 2, 00:07:25.372 "base_bdevs_list": [ 00:07:25.372 { 00:07:25.372 "name": "pt1", 00:07:25.372 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:25.372 "is_configured": true, 00:07:25.372 "data_offset": 2048, 00:07:25.372 "data_size": 63488 00:07:25.372 }, 00:07:25.372 { 00:07:25.372 "name": null, 00:07:25.372 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:25.372 "is_configured": false, 00:07:25.372 "data_offset": 2048, 00:07:25.372 "data_size": 63488 00:07:25.372 } 00:07:25.372 ] 00:07:25.372 }' 00:07:25.372 10:52:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.372 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.632 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:25.632 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.633 [2024-11-15 10:52:32.446102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:25.633 [2024-11-15 10:52:32.446240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.633 [2024-11-15 10:52:32.446281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:25.633 [2024-11-15 10:52:32.446317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.633 [2024-11-15 10:52:32.447038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.633 [2024-11-15 10:52:32.447091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:25.633 [2024-11-15 10:52:32.447233] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:25.633 [2024-11-15 10:52:32.447285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:25.633 [2024-11-15 10:52:32.447480] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:25.633 [2024-11-15 10:52:32.447508] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:25.633 [2024-11-15 10:52:32.447836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:25.633 [2024-11-15 10:52:32.448078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:25.633 [2024-11-15 10:52:32.448103] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:25.633 [2024-11-15 10:52:32.448318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.633 pt2 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.633 "name": "raid_bdev1", 00:07:25.633 "uuid": "47d42a0e-14cb-4e3d-9b34-814ea5e99198", 00:07:25.633 "strip_size_kb": 64, 00:07:25.633 "state": "online", 00:07:25.633 "raid_level": "raid0", 00:07:25.633 "superblock": true, 00:07:25.633 "num_base_bdevs": 2, 00:07:25.633 "num_base_bdevs_discovered": 2, 00:07:25.633 "num_base_bdevs_operational": 2, 00:07:25.633 "base_bdevs_list": [ 00:07:25.633 { 00:07:25.633 "name": "pt1", 00:07:25.633 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:25.633 "is_configured": true, 00:07:25.633 "data_offset": 2048, 00:07:25.633 "data_size": 63488 00:07:25.633 }, 00:07:25.633 { 00:07:25.633 "name": "pt2", 00:07:25.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:25.633 "is_configured": true, 00:07:25.633 "data_offset": 2048, 00:07:25.633 "data_size": 63488 00:07:25.633 } 00:07:25.633 ] 00:07:25.633 }' 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.633 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.213 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:26.213 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:26.213 
10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:26.213 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:26.213 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:26.213 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:26.213 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:26.213 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:26.213 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.213 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.213 [2024-11-15 10:52:32.881601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.213 10:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.213 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:26.213 "name": "raid_bdev1", 00:07:26.213 "aliases": [ 00:07:26.213 "47d42a0e-14cb-4e3d-9b34-814ea5e99198" 00:07:26.213 ], 00:07:26.213 "product_name": "Raid Volume", 00:07:26.213 "block_size": 512, 00:07:26.213 "num_blocks": 126976, 00:07:26.213 "uuid": "47d42a0e-14cb-4e3d-9b34-814ea5e99198", 00:07:26.213 "assigned_rate_limits": { 00:07:26.213 "rw_ios_per_sec": 0, 00:07:26.213 "rw_mbytes_per_sec": 0, 00:07:26.213 "r_mbytes_per_sec": 0, 00:07:26.213 "w_mbytes_per_sec": 0 00:07:26.213 }, 00:07:26.213 "claimed": false, 00:07:26.213 "zoned": false, 00:07:26.213 "supported_io_types": { 00:07:26.213 "read": true, 00:07:26.213 "write": true, 00:07:26.213 "unmap": true, 00:07:26.213 "flush": true, 00:07:26.213 "reset": true, 00:07:26.213 "nvme_admin": false, 00:07:26.213 "nvme_io": false, 00:07:26.213 "nvme_io_md": false, 00:07:26.213 
"write_zeroes": true, 00:07:26.213 "zcopy": false, 00:07:26.213 "get_zone_info": false, 00:07:26.213 "zone_management": false, 00:07:26.213 "zone_append": false, 00:07:26.213 "compare": false, 00:07:26.213 "compare_and_write": false, 00:07:26.213 "abort": false, 00:07:26.214 "seek_hole": false, 00:07:26.214 "seek_data": false, 00:07:26.214 "copy": false, 00:07:26.214 "nvme_iov_md": false 00:07:26.214 }, 00:07:26.214 "memory_domains": [ 00:07:26.214 { 00:07:26.214 "dma_device_id": "system", 00:07:26.214 "dma_device_type": 1 00:07:26.214 }, 00:07:26.214 { 00:07:26.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.214 "dma_device_type": 2 00:07:26.214 }, 00:07:26.214 { 00:07:26.214 "dma_device_id": "system", 00:07:26.214 "dma_device_type": 1 00:07:26.214 }, 00:07:26.214 { 00:07:26.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.214 "dma_device_type": 2 00:07:26.214 } 00:07:26.214 ], 00:07:26.214 "driver_specific": { 00:07:26.214 "raid": { 00:07:26.214 "uuid": "47d42a0e-14cb-4e3d-9b34-814ea5e99198", 00:07:26.214 "strip_size_kb": 64, 00:07:26.214 "state": "online", 00:07:26.214 "raid_level": "raid0", 00:07:26.214 "superblock": true, 00:07:26.214 "num_base_bdevs": 2, 00:07:26.214 "num_base_bdevs_discovered": 2, 00:07:26.214 "num_base_bdevs_operational": 2, 00:07:26.214 "base_bdevs_list": [ 00:07:26.214 { 00:07:26.214 "name": "pt1", 00:07:26.214 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.214 "is_configured": true, 00:07:26.214 "data_offset": 2048, 00:07:26.214 "data_size": 63488 00:07:26.214 }, 00:07:26.214 { 00:07:26.214 "name": "pt2", 00:07:26.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.214 "is_configured": true, 00:07:26.214 "data_offset": 2048, 00:07:26.214 "data_size": 63488 00:07:26.214 } 00:07:26.214 ] 00:07:26.214 } 00:07:26.214 } 00:07:26.214 }' 00:07:26.214 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:26.214 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:26.214 pt2' 00:07:26.214 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.214 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:26.214 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.214 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.214 10:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.214 10:52:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:26.214 [2024-11-15 10:52:33.085203] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 47d42a0e-14cb-4e3d-9b34-814ea5e99198 '!=' 47d42a0e-14cb-4e3d-9b34-814ea5e99198 ']' 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61342 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61342 ']' 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61342 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:26.214 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61342 00:07:26.485 10:52:33 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:26.485 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:26.485 killing process with pid 61342 00:07:26.485 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61342' 00:07:26.485 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61342 00:07:26.485 [2024-11-15 10:52:33.157217] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:26.485 [2024-11-15 10:52:33.157363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.485 10:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61342 00:07:26.485 [2024-11-15 10:52:33.157428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:26.485 [2024-11-15 10:52:33.157444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:26.485 [2024-11-15 10:52:33.395126] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.865 10:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:27.865 00:07:27.865 real 0m4.669s 00:07:27.865 user 0m6.381s 00:07:27.865 sys 0m0.859s 00:07:27.865 10:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:27.865 10:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.865 ************************************ 00:07:27.865 END TEST raid_superblock_test 00:07:27.865 ************************************ 00:07:27.865 10:52:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:27.865 10:52:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:27.865 10:52:34 bdev_raid -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:07:27.865 10:52:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.865 ************************************ 00:07:27.865 START TEST raid_read_error_test 00:07:27.865 ************************************ 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pdGGDulXAA 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61555 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61555 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61555 ']' 00:07:27.865 10:52:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.866 10:52:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:27.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.866 10:52:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:27.866 10:52:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:27.866 10:52:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.126 [2024-11-15 10:52:34.808628] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:07:28.126 [2024-11-15 10:52:34.808745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61555 ] 00:07:28.126 [2024-11-15 10:52:34.968210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.386 [2024-11-15 10:52:35.114144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.646 [2024-11-15 10:52:35.364634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.646 [2024-11-15 10:52:35.364723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.906 BaseBdev1_malloc 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.906 true 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.906 [2024-11-15 10:52:35.753545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:28.906 [2024-11-15 10:52:35.753610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.906 [2024-11-15 10:52:35.753636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:28.906 [2024-11-15 10:52:35.753648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.906 [2024-11-15 10:52:35.756423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.906 [2024-11-15 10:52:35.756464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:28.906 BaseBdev1 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:28.906 BaseBdev2_malloc 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.906 true 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.906 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.164 [2024-11-15 10:52:35.833204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:29.164 [2024-11-15 10:52:35.833275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.164 [2024-11-15 10:52:35.833297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:29.164 [2024-11-15 10:52:35.833329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.164 [2024-11-15 10:52:35.836124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.164 [2024-11-15 10:52:35.836165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:29.164 BaseBdev2 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:29.164 10:52:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.164 [2024-11-15 10:52:35.845253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.164 [2024-11-15 10:52:35.847661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.164 [2024-11-15 10:52:35.847918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:29.164 [2024-11-15 10:52:35.847946] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:29.164 [2024-11-15 10:52:35.848253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:29.164 [2024-11-15 10:52:35.848504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:29.164 [2024-11-15 10:52:35.848525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:29.164 [2024-11-15 10:52:35.848711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.164 "name": "raid_bdev1", 00:07:29.164 "uuid": "aef69889-83d3-4a35-8e69-35d4c9f3b757", 00:07:29.164 "strip_size_kb": 64, 00:07:29.164 "state": "online", 00:07:29.164 "raid_level": "raid0", 00:07:29.164 "superblock": true, 00:07:29.164 "num_base_bdevs": 2, 00:07:29.164 "num_base_bdevs_discovered": 2, 00:07:29.164 "num_base_bdevs_operational": 2, 00:07:29.164 "base_bdevs_list": [ 00:07:29.164 { 00:07:29.164 "name": "BaseBdev1", 00:07:29.164 "uuid": "9eb64307-7a64-5a27-8cb7-79e57b071bd8", 00:07:29.164 "is_configured": true, 00:07:29.164 "data_offset": 2048, 00:07:29.164 "data_size": 63488 00:07:29.164 }, 00:07:29.164 { 00:07:29.164 "name": "BaseBdev2", 00:07:29.164 "uuid": "a55a1826-e000-5d6f-9c3b-77d1c94962cc", 00:07:29.164 "is_configured": true, 00:07:29.164 "data_offset": 2048, 00:07:29.164 "data_size": 63488 00:07:29.164 } 00:07:29.164 ] 00:07:29.164 }' 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.164 10:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.421 10:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:29.421 10:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:29.680 [2024-11-15 10:52:36.361912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.616 "name": "raid_bdev1", 00:07:30.616 "uuid": "aef69889-83d3-4a35-8e69-35d4c9f3b757", 00:07:30.616 "strip_size_kb": 64, 00:07:30.616 "state": "online", 00:07:30.616 "raid_level": "raid0", 00:07:30.616 "superblock": true, 00:07:30.616 "num_base_bdevs": 2, 00:07:30.616 "num_base_bdevs_discovered": 2, 00:07:30.616 "num_base_bdevs_operational": 2, 00:07:30.616 "base_bdevs_list": [ 00:07:30.616 { 00:07:30.616 "name": "BaseBdev1", 00:07:30.616 "uuid": "9eb64307-7a64-5a27-8cb7-79e57b071bd8", 00:07:30.616 "is_configured": true, 00:07:30.616 "data_offset": 2048, 00:07:30.616 "data_size": 63488 00:07:30.616 }, 00:07:30.616 { 00:07:30.616 "name": "BaseBdev2", 00:07:30.616 "uuid": "a55a1826-e000-5d6f-9c3b-77d1c94962cc", 00:07:30.616 "is_configured": true, 00:07:30.616 "data_offset": 2048, 00:07:30.616 "data_size": 63488 00:07:30.616 } 00:07:30.616 ] 00:07:30.616 }' 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.616 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.874 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:30.874 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.874 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.874 [2024-11-15 10:52:37.723292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:30.874 [2024-11-15 10:52:37.723365] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.874 [2024-11-15 10:52:37.726164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.874 [2024-11-15 10:52:37.726216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.874 [2024-11-15 10:52:37.726253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.874 [2024-11-15 10:52:37.726267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:30.874 { 00:07:30.874 "results": [ 00:07:30.874 { 00:07:30.874 "job": "raid_bdev1", 00:07:30.874 "core_mask": "0x1", 00:07:30.874 "workload": "randrw", 00:07:30.874 "percentage": 50, 00:07:30.874 "status": "finished", 00:07:30.874 "queue_depth": 1, 00:07:30.874 "io_size": 131072, 00:07:30.874 "runtime": 1.361837, 00:07:30.874 "iops": 12963.372268487345, 00:07:30.874 "mibps": 1620.421533560918, 00:07:30.874 "io_failed": 1, 00:07:30.874 "io_timeout": 0, 00:07:30.874 "avg_latency_us": 108.50308743889121, 00:07:30.874 "min_latency_us": 27.72401746724891, 00:07:30.874 "max_latency_us": 1874.5013100436681 00:07:30.874 } 00:07:30.874 ], 00:07:30.874 "core_count": 1 00:07:30.874 } 00:07:30.874 10:52:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.874 10:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61555 00:07:30.874 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61555 ']' 00:07:30.874 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61555 00:07:30.874 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:07:30.874 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:30.874 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61555 00:07:30.874 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:30.874 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:30.874 killing process with pid 61555 00:07:30.874 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61555' 00:07:30.874 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61555 00:07:30.874 [2024-11-15 10:52:37.768011] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.874 10:52:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61555 00:07:31.132 [2024-11-15 10:52:37.931785] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.511 10:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pdGGDulXAA 00:07:32.511 10:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:32.511 10:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:32.511 10:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:32.511 10:52:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:32.511 10:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:32.511 10:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:32.511 10:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:32.511 00:07:32.511 real 0m4.609s 00:07:32.511 user 0m5.379s 00:07:32.511 sys 0m0.627s 00:07:32.511 10:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:32.511 10:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.511 ************************************ 00:07:32.511 END TEST raid_read_error_test 00:07:32.511 ************************************ 00:07:32.511 10:52:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:32.511 10:52:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:32.511 10:52:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:32.511 10:52:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.511 ************************************ 00:07:32.511 START TEST raid_write_error_test 00:07:32.511 ************************************ 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.511 10:52:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:32.511 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:32.512 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LA83ekYbJB 00:07:32.512 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61699 00:07:32.512 10:52:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61699 00:07:32.512 10:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61699 ']' 00:07:32.512 10:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:32.512 10:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.512 10:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:32.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.512 10:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.512 10:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:32.512 10:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.771 [2024-11-15 10:52:39.500617] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:07:32.771 [2024-11-15 10:52:39.500753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61699 ] 00:07:32.771 [2024-11-15 10:52:39.656573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.031 [2024-11-15 10:52:39.815533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.290 [2024-11-15 10:52:40.071801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.290 [2024-11-15 10:52:40.071921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.549 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:33.549 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:33.549 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:33.549 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:33.549 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.549 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.549 BaseBdev1_malloc 00:07:33.549 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.549 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:33.549 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.549 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.549 true 00:07:33.549 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:33.549 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:33.549 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.549 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.549 [2024-11-15 10:52:40.461213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:33.549 [2024-11-15 10:52:40.461281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.550 [2024-11-15 10:52:40.461315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:33.550 [2024-11-15 10:52:40.461328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.550 [2024-11-15 10:52:40.463830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.550 [2024-11-15 10:52:40.463879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:33.550 BaseBdev1 00:07:33.550 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.550 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:33.550 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:33.550 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.550 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.810 BaseBdev2_malloc 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:33.810 10:52:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.810 true 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.810 [2024-11-15 10:52:40.536936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:33.810 [2024-11-15 10:52:40.537007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.810 [2024-11-15 10:52:40.537040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:33.810 [2024-11-15 10:52:40.537052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.810 [2024-11-15 10:52:40.539535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.810 [2024-11-15 10:52:40.539572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:33.810 BaseBdev2 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.810 [2024-11-15 10:52:40.548985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:33.810 [2024-11-15 10:52:40.551132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.810 [2024-11-15 10:52:40.551352] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:33.810 [2024-11-15 10:52:40.551371] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.810 [2024-11-15 10:52:40.551629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:33.810 [2024-11-15 10:52:40.551836] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:33.810 [2024-11-15 10:52:40.551861] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:33.810 [2024-11-15 10:52:40.552033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.810 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.810 "name": "raid_bdev1", 00:07:33.810 "uuid": "31c414d3-a37c-4668-9af3-39e9bc63f453", 00:07:33.810 "strip_size_kb": 64, 00:07:33.810 "state": "online", 00:07:33.810 "raid_level": "raid0", 00:07:33.810 "superblock": true, 00:07:33.810 "num_base_bdevs": 2, 00:07:33.810 "num_base_bdevs_discovered": 2, 00:07:33.810 "num_base_bdevs_operational": 2, 00:07:33.810 "base_bdevs_list": [ 00:07:33.810 { 00:07:33.810 "name": "BaseBdev1", 00:07:33.810 "uuid": "4e815704-bb22-57ac-a9c0-f173e1c4b914", 00:07:33.810 "is_configured": true, 00:07:33.810 "data_offset": 2048, 00:07:33.810 "data_size": 63488 00:07:33.810 }, 00:07:33.810 { 00:07:33.810 "name": "BaseBdev2", 00:07:33.810 "uuid": "042332d0-0b26-599d-84ba-b545c04a17eb", 00:07:33.810 "is_configured": true, 00:07:33.810 "data_offset": 2048, 00:07:33.810 "data_size": 63488 00:07:33.810 } 00:07:33.810 ] 00:07:33.810 }' 00:07:33.811 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.811 10:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.070 10:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:34.070 10:52:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:34.330 [2024-11-15 10:52:41.066015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:35.268 10:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:35.268 10:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.268 10:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.268 10:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.269 10:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:35.269 10:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:35.269 10:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:35.269 10:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:35.269 10:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.269 10:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.269 10:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:35.269 10:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.269 10:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.269 10:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.269 10:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.269 10:52:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.269 10:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.269 10:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.269 10:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.269 10:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.269 10:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.269 10:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.269 10:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.269 "name": "raid_bdev1", 00:07:35.269 "uuid": "31c414d3-a37c-4668-9af3-39e9bc63f453", 00:07:35.269 "strip_size_kb": 64, 00:07:35.269 "state": "online", 00:07:35.269 "raid_level": "raid0", 00:07:35.269 "superblock": true, 00:07:35.269 "num_base_bdevs": 2, 00:07:35.269 "num_base_bdevs_discovered": 2, 00:07:35.269 "num_base_bdevs_operational": 2, 00:07:35.269 "base_bdevs_list": [ 00:07:35.269 { 00:07:35.269 "name": "BaseBdev1", 00:07:35.269 "uuid": "4e815704-bb22-57ac-a9c0-f173e1c4b914", 00:07:35.269 "is_configured": true, 00:07:35.269 "data_offset": 2048, 00:07:35.269 "data_size": 63488 00:07:35.269 }, 00:07:35.269 { 00:07:35.269 "name": "BaseBdev2", 00:07:35.269 "uuid": "042332d0-0b26-599d-84ba-b545c04a17eb", 00:07:35.269 "is_configured": true, 00:07:35.269 "data_offset": 2048, 00:07:35.269 "data_size": 63488 00:07:35.269 } 00:07:35.269 ] 00:07:35.269 }' 00:07:35.269 10:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.269 10:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.528 10:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:35.528 10:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.528 10:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.528 [2024-11-15 10:52:42.419721] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:35.528 [2024-11-15 10:52:42.419777] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.528 [2024-11-15 10:52:42.422565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.528 [2024-11-15 10:52:42.422615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.528 [2024-11-15 10:52:42.422651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.528 [2024-11-15 10:52:42.422664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:35.528 { 00:07:35.528 "results": [ 00:07:35.528 { 00:07:35.528 "job": "raid_bdev1", 00:07:35.528 "core_mask": "0x1", 00:07:35.528 "workload": "randrw", 00:07:35.528 "percentage": 50, 00:07:35.528 "status": "finished", 00:07:35.528 "queue_depth": 1, 00:07:35.528 "io_size": 131072, 00:07:35.528 "runtime": 1.353982, 00:07:35.528 "iops": 12226.898141925078, 00:07:35.528 "mibps": 1528.3622677406347, 00:07:35.528 "io_failed": 1, 00:07:35.528 "io_timeout": 0, 00:07:35.528 "avg_latency_us": 114.74724755784523, 00:07:35.528 "min_latency_us": 28.618340611353712, 00:07:35.528 "max_latency_us": 1845.8829694323144 00:07:35.528 } 00:07:35.528 ], 00:07:35.528 "core_count": 1 00:07:35.528 } 00:07:35.528 10:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.528 10:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61699 00:07:35.528 10:52:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 61699 ']' 00:07:35.528 10:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61699 00:07:35.528 10:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:07:35.528 10:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:35.529 10:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61699 00:07:35.788 10:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:35.788 10:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:35.788 killing process with pid 61699 00:07:35.788 10:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61699' 00:07:35.788 10:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61699 00:07:35.788 [2024-11-15 10:52:42.470500] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.788 10:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61699 00:07:35.788 [2024-11-15 10:52:42.631720] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:37.167 10:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:37.167 10:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:37.167 10:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LA83ekYbJB 00:07:37.167 10:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:37.167 10:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:37.167 10:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:37.167 10:52:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:37.167 10:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:37.167 00:07:37.167 real 0m4.646s 00:07:37.167 user 0m5.434s 00:07:37.167 sys 0m0.626s 00:07:37.167 10:52:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:37.167 10:52:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.167 ************************************ 00:07:37.167 END TEST raid_write_error_test 00:07:37.167 ************************************ 00:07:37.167 10:52:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:37.167 10:52:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:37.426 10:52:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:37.426 10:52:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:37.426 10:52:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:37.426 ************************************ 00:07:37.426 START TEST raid_state_function_test 00:07:37.426 ************************************ 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61843 00:07:37.426 10:52:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61843' 00:07:37.426 Process raid pid: 61843 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61843 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61843 ']' 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:37.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:37.426 10:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.426 [2024-11-15 10:52:44.213753] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:07:37.426 [2024-11-15 10:52:44.213891] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.685 [2024-11-15 10:52:44.389652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.685 [2024-11-15 10:52:44.533326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.947 [2024-11-15 10:52:44.780817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.947 [2024-11-15 10:52:44.780882] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 [2024-11-15 10:52:45.062879] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:38.209 [2024-11-15 10:52:45.062967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:38.209 [2024-11-15 10:52:45.062980] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.209 [2024-11-15 10:52:45.063000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.209 10:52:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.209 "name": "Existed_Raid", 00:07:38.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.209 "strip_size_kb": 64, 00:07:38.209 "state": "configuring", 00:07:38.209 
"raid_level": "concat", 00:07:38.209 "superblock": false, 00:07:38.209 "num_base_bdevs": 2, 00:07:38.209 "num_base_bdevs_discovered": 0, 00:07:38.209 "num_base_bdevs_operational": 2, 00:07:38.209 "base_bdevs_list": [ 00:07:38.209 { 00:07:38.209 "name": "BaseBdev1", 00:07:38.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.209 "is_configured": false, 00:07:38.209 "data_offset": 0, 00:07:38.209 "data_size": 0 00:07:38.209 }, 00:07:38.209 { 00:07:38.209 "name": "BaseBdev2", 00:07:38.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.209 "is_configured": false, 00:07:38.209 "data_offset": 0, 00:07:38.209 "data_size": 0 00:07:38.209 } 00:07:38.209 ] 00:07:38.209 }' 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.209 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.778 [2024-11-15 10:52:45.514051] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:38.778 [2024-11-15 10:52:45.514212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:38.778 [2024-11-15 10:52:45.525962] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:38.778 [2024-11-15 10:52:45.526068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:38.778 [2024-11-15 10:52:45.526107] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.778 [2024-11-15 10:52:45.526135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.778 [2024-11-15 10:52:45.583743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:38.778 BaseBdev1 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.778 [ 00:07:38.778 { 00:07:38.778 "name": "BaseBdev1", 00:07:38.778 "aliases": [ 00:07:38.778 "683d40dd-39e9-489d-b4a7-4dd087e3739d" 00:07:38.778 ], 00:07:38.778 "product_name": "Malloc disk", 00:07:38.778 "block_size": 512, 00:07:38.778 "num_blocks": 65536, 00:07:38.778 "uuid": "683d40dd-39e9-489d-b4a7-4dd087e3739d", 00:07:38.778 "assigned_rate_limits": { 00:07:38.778 "rw_ios_per_sec": 0, 00:07:38.778 "rw_mbytes_per_sec": 0, 00:07:38.778 "r_mbytes_per_sec": 0, 00:07:38.778 "w_mbytes_per_sec": 0 00:07:38.778 }, 00:07:38.778 "claimed": true, 00:07:38.778 "claim_type": "exclusive_write", 00:07:38.778 "zoned": false, 00:07:38.778 "supported_io_types": { 00:07:38.778 "read": true, 00:07:38.778 "write": true, 00:07:38.778 "unmap": true, 00:07:38.778 "flush": true, 00:07:38.778 "reset": true, 00:07:38.778 "nvme_admin": false, 00:07:38.778 "nvme_io": false, 00:07:38.778 "nvme_io_md": false, 00:07:38.778 "write_zeroes": true, 00:07:38.778 "zcopy": true, 00:07:38.778 "get_zone_info": false, 00:07:38.778 "zone_management": false, 00:07:38.778 "zone_append": false, 00:07:38.778 "compare": false, 00:07:38.778 "compare_and_write": false, 00:07:38.778 "abort": true, 00:07:38.778 "seek_hole": false, 00:07:38.778 "seek_data": false, 00:07:38.778 "copy": true, 00:07:38.778 "nvme_iov_md": 
false 00:07:38.778 }, 00:07:38.778 "memory_domains": [ 00:07:38.778 { 00:07:38.778 "dma_device_id": "system", 00:07:38.778 "dma_device_type": 1 00:07:38.778 }, 00:07:38.778 { 00:07:38.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.778 "dma_device_type": 2 00:07:38.778 } 00:07:38.778 ], 00:07:38.778 "driver_specific": {} 00:07:38.778 } 00:07:38.778 ] 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.778 10:52:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.778 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.778 "name": "Existed_Raid", 00:07:38.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.778 "strip_size_kb": 64, 00:07:38.778 "state": "configuring", 00:07:38.778 "raid_level": "concat", 00:07:38.778 "superblock": false, 00:07:38.778 "num_base_bdevs": 2, 00:07:38.779 "num_base_bdevs_discovered": 1, 00:07:38.779 "num_base_bdevs_operational": 2, 00:07:38.779 "base_bdevs_list": [ 00:07:38.779 { 00:07:38.779 "name": "BaseBdev1", 00:07:38.779 "uuid": "683d40dd-39e9-489d-b4a7-4dd087e3739d", 00:07:38.779 "is_configured": true, 00:07:38.779 "data_offset": 0, 00:07:38.779 "data_size": 65536 00:07:38.779 }, 00:07:38.779 { 00:07:38.779 "name": "BaseBdev2", 00:07:38.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.779 "is_configured": false, 00:07:38.779 "data_offset": 0, 00:07:38.779 "data_size": 0 00:07:38.779 } 00:07:38.779 ] 00:07:38.779 }' 00:07:38.779 10:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.779 10:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.348 [2024-11-15 10:52:46.094965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:39.348 [2024-11-15 10:52:46.095133] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.348 [2024-11-15 10:52:46.102971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:39.348 [2024-11-15 10:52:46.105255] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:39.348 [2024-11-15 10:52:46.105356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.348 "name": "Existed_Raid", 00:07:39.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.348 "strip_size_kb": 64, 00:07:39.348 "state": "configuring", 00:07:39.348 "raid_level": "concat", 00:07:39.348 "superblock": false, 00:07:39.348 "num_base_bdevs": 2, 00:07:39.348 "num_base_bdevs_discovered": 1, 00:07:39.348 "num_base_bdevs_operational": 2, 00:07:39.348 "base_bdevs_list": [ 00:07:39.348 { 00:07:39.348 "name": "BaseBdev1", 00:07:39.348 "uuid": "683d40dd-39e9-489d-b4a7-4dd087e3739d", 00:07:39.348 "is_configured": true, 00:07:39.348 "data_offset": 0, 00:07:39.348 "data_size": 65536 00:07:39.348 }, 00:07:39.348 { 00:07:39.348 "name": "BaseBdev2", 00:07:39.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.348 "is_configured": false, 00:07:39.348 "data_offset": 0, 00:07:39.348 "data_size": 0 
00:07:39.348 } 00:07:39.348 ] 00:07:39.348 }' 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.348 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.916 [2024-11-15 10:52:46.633180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.916 [2024-11-15 10:52:46.633326] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:39.916 [2024-11-15 10:52:46.633357] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:39.916 [2024-11-15 10:52:46.633713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:39.916 [2024-11-15 10:52:46.633959] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:39.916 [2024-11-15 10:52:46.634008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:39.916 [2024-11-15 10:52:46.634364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.916 BaseBdev2 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:39.916 10:52:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.916 [ 00:07:39.916 { 00:07:39.916 "name": "BaseBdev2", 00:07:39.916 "aliases": [ 00:07:39.916 "1e369ada-0d65-4ce0-b69b-e0ad66f7874a" 00:07:39.916 ], 00:07:39.916 "product_name": "Malloc disk", 00:07:39.916 "block_size": 512, 00:07:39.916 "num_blocks": 65536, 00:07:39.916 "uuid": "1e369ada-0d65-4ce0-b69b-e0ad66f7874a", 00:07:39.916 "assigned_rate_limits": { 00:07:39.916 "rw_ios_per_sec": 0, 00:07:39.916 "rw_mbytes_per_sec": 0, 00:07:39.916 "r_mbytes_per_sec": 0, 00:07:39.916 "w_mbytes_per_sec": 0 00:07:39.916 }, 00:07:39.916 "claimed": true, 00:07:39.916 "claim_type": "exclusive_write", 00:07:39.916 "zoned": false, 00:07:39.916 "supported_io_types": { 00:07:39.916 "read": true, 00:07:39.916 "write": true, 00:07:39.916 "unmap": true, 00:07:39.916 "flush": true, 00:07:39.916 "reset": true, 00:07:39.916 "nvme_admin": false, 00:07:39.916 "nvme_io": false, 00:07:39.916 "nvme_io_md": 
false, 00:07:39.916 "write_zeroes": true, 00:07:39.916 "zcopy": true, 00:07:39.916 "get_zone_info": false, 00:07:39.916 "zone_management": false, 00:07:39.916 "zone_append": false, 00:07:39.916 "compare": false, 00:07:39.916 "compare_and_write": false, 00:07:39.916 "abort": true, 00:07:39.916 "seek_hole": false, 00:07:39.916 "seek_data": false, 00:07:39.916 "copy": true, 00:07:39.916 "nvme_iov_md": false 00:07:39.916 }, 00:07:39.916 "memory_domains": [ 00:07:39.916 { 00:07:39.916 "dma_device_id": "system", 00:07:39.916 "dma_device_type": 1 00:07:39.916 }, 00:07:39.916 { 00:07:39.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.916 "dma_device_type": 2 00:07:39.916 } 00:07:39.916 ], 00:07:39.916 "driver_specific": {} 00:07:39.916 } 00:07:39.916 ] 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.916 "name": "Existed_Raid", 00:07:39.916 "uuid": "2996abec-16c5-4456-b3a7-85a6c1486ea9", 00:07:39.916 "strip_size_kb": 64, 00:07:39.916 "state": "online", 00:07:39.916 "raid_level": "concat", 00:07:39.916 "superblock": false, 00:07:39.916 "num_base_bdevs": 2, 00:07:39.916 "num_base_bdevs_discovered": 2, 00:07:39.916 "num_base_bdevs_operational": 2, 00:07:39.916 "base_bdevs_list": [ 00:07:39.916 { 00:07:39.916 "name": "BaseBdev1", 00:07:39.916 "uuid": "683d40dd-39e9-489d-b4a7-4dd087e3739d", 00:07:39.916 "is_configured": true, 00:07:39.916 "data_offset": 0, 00:07:39.916 "data_size": 65536 00:07:39.916 }, 00:07:39.916 { 00:07:39.916 "name": "BaseBdev2", 00:07:39.916 "uuid": "1e369ada-0d65-4ce0-b69b-e0ad66f7874a", 00:07:39.916 "is_configured": true, 00:07:39.916 "data_offset": 0, 00:07:39.916 "data_size": 65536 00:07:39.916 } 00:07:39.916 ] 00:07:39.916 }' 00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:39.916 10:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.175 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:40.175 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:40.175 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:40.175 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:40.175 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:40.175 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:40.432 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:40.432 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:40.432 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.432 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.432 [2024-11-15 10:52:47.108791] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.432 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.432 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:40.432 "name": "Existed_Raid", 00:07:40.432 "aliases": [ 00:07:40.432 "2996abec-16c5-4456-b3a7-85a6c1486ea9" 00:07:40.432 ], 00:07:40.432 "product_name": "Raid Volume", 00:07:40.432 "block_size": 512, 00:07:40.432 "num_blocks": 131072, 00:07:40.432 "uuid": "2996abec-16c5-4456-b3a7-85a6c1486ea9", 00:07:40.432 "assigned_rate_limits": { 00:07:40.432 "rw_ios_per_sec": 0, 00:07:40.432 "rw_mbytes_per_sec": 0, 00:07:40.432 "r_mbytes_per_sec": 
0, 00:07:40.432 "w_mbytes_per_sec": 0 00:07:40.432 }, 00:07:40.432 "claimed": false, 00:07:40.432 "zoned": false, 00:07:40.432 "supported_io_types": { 00:07:40.432 "read": true, 00:07:40.432 "write": true, 00:07:40.432 "unmap": true, 00:07:40.432 "flush": true, 00:07:40.432 "reset": true, 00:07:40.432 "nvme_admin": false, 00:07:40.432 "nvme_io": false, 00:07:40.432 "nvme_io_md": false, 00:07:40.432 "write_zeroes": true, 00:07:40.432 "zcopy": false, 00:07:40.432 "get_zone_info": false, 00:07:40.432 "zone_management": false, 00:07:40.432 "zone_append": false, 00:07:40.432 "compare": false, 00:07:40.432 "compare_and_write": false, 00:07:40.432 "abort": false, 00:07:40.432 "seek_hole": false, 00:07:40.432 "seek_data": false, 00:07:40.432 "copy": false, 00:07:40.432 "nvme_iov_md": false 00:07:40.432 }, 00:07:40.432 "memory_domains": [ 00:07:40.432 { 00:07:40.432 "dma_device_id": "system", 00:07:40.432 "dma_device_type": 1 00:07:40.432 }, 00:07:40.432 { 00:07:40.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.432 "dma_device_type": 2 00:07:40.432 }, 00:07:40.432 { 00:07:40.432 "dma_device_id": "system", 00:07:40.432 "dma_device_type": 1 00:07:40.432 }, 00:07:40.432 { 00:07:40.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.432 "dma_device_type": 2 00:07:40.432 } 00:07:40.432 ], 00:07:40.432 "driver_specific": { 00:07:40.432 "raid": { 00:07:40.432 "uuid": "2996abec-16c5-4456-b3a7-85a6c1486ea9", 00:07:40.432 "strip_size_kb": 64, 00:07:40.432 "state": "online", 00:07:40.432 "raid_level": "concat", 00:07:40.432 "superblock": false, 00:07:40.432 "num_base_bdevs": 2, 00:07:40.432 "num_base_bdevs_discovered": 2, 00:07:40.432 "num_base_bdevs_operational": 2, 00:07:40.432 "base_bdevs_list": [ 00:07:40.432 { 00:07:40.432 "name": "BaseBdev1", 00:07:40.432 "uuid": "683d40dd-39e9-489d-b4a7-4dd087e3739d", 00:07:40.432 "is_configured": true, 00:07:40.432 "data_offset": 0, 00:07:40.432 "data_size": 65536 00:07:40.432 }, 00:07:40.432 { 00:07:40.432 "name": "BaseBdev2", 
00:07:40.432 "uuid": "1e369ada-0d65-4ce0-b69b-e0ad66f7874a", 00:07:40.432 "is_configured": true, 00:07:40.432 "data_offset": 0, 00:07:40.432 "data_size": 65536 00:07:40.432 } 00:07:40.432 ] 00:07:40.432 } 00:07:40.432 } 00:07:40.432 }' 00:07:40.432 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:40.432 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:40.432 BaseBdev2' 00:07:40.432 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.432 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:40.432 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:40.432 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:40.432 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.432 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.432 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.432 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.433 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.433 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.433 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:40.433 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:40.433 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:40.433 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.433 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.433 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.433 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.433 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.433 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:40.433 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.433 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.433 [2024-11-15 10:52:47.320170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:40.433 [2024-11-15 10:52:47.320259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:40.433 [2024-11-15 10:52:47.320417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.691 "name": "Existed_Raid", 00:07:40.691 "uuid": "2996abec-16c5-4456-b3a7-85a6c1486ea9", 00:07:40.691 "strip_size_kb": 64, 00:07:40.691 
"state": "offline", 00:07:40.691 "raid_level": "concat", 00:07:40.691 "superblock": false, 00:07:40.691 "num_base_bdevs": 2, 00:07:40.691 "num_base_bdevs_discovered": 1, 00:07:40.691 "num_base_bdevs_operational": 1, 00:07:40.691 "base_bdevs_list": [ 00:07:40.691 { 00:07:40.691 "name": null, 00:07:40.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.691 "is_configured": false, 00:07:40.691 "data_offset": 0, 00:07:40.691 "data_size": 65536 00:07:40.691 }, 00:07:40.691 { 00:07:40.691 "name": "BaseBdev2", 00:07:40.691 "uuid": "1e369ada-0d65-4ce0-b69b-e0ad66f7874a", 00:07:40.691 "is_configured": true, 00:07:40.691 "data_offset": 0, 00:07:40.691 "data_size": 65536 00:07:40.691 } 00:07:40.691 ] 00:07:40.691 }' 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.691 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.257 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:41.257 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:41.257 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.257 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.257 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.257 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:41.257 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.257 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:41.257 10:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:41.257 10:52:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:41.257 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.257 10:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.257 [2024-11-15 10:52:47.951671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:41.257 [2024-11-15 10:52:47.951744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61843 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61843 ']' 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 61843 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61843 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61843' 00:07:41.257 killing process with pid 61843 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61843 00:07:41.257 [2024-11-15 10:52:48.148895] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.257 10:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61843 00:07:41.257 [2024-11-15 10:52:48.168201] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:42.666 ************************************ 00:07:42.666 END TEST raid_state_function_test 00:07:42.666 ************************************ 00:07:42.666 00:07:42.666 real 0m5.217s 00:07:42.666 user 0m7.431s 00:07:42.666 sys 0m0.938s 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.666 10:52:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:42.666 10:52:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:07:42.666 10:52:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:42.666 10:52:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.666 ************************************ 00:07:42.666 START TEST raid_state_function_test_sb 00:07:42.666 ************************************ 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:42.666 Process raid pid: 62096 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62096 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62096' 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62096 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 62096 ']' 00:07:42.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:42.666 10:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.666 [2024-11-15 10:52:49.486353] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:07:42.666 [2024-11-15 10:52:49.486477] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.924 [2024-11-15 10:52:49.661780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.924 [2024-11-15 10:52:49.782035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.181 [2024-11-15 10:52:50.004888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.181 [2024-11-15 10:52:50.004937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.438 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:43.438 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:07:43.438 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:43.438 10:52:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.438 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.696 [2024-11-15 10:52:50.363399] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:43.696 [2024-11-15 10:52:50.363457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:43.696 [2024-11-15 10:52:50.363470] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:43.696 [2024-11-15 10:52:50.363481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:43.696 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.696 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:43.696 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.696 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.696 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.696 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.696 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.696 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.696 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.696 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.696 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.696 
10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.697 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.697 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.697 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.697 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.697 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.697 "name": "Existed_Raid", 00:07:43.697 "uuid": "027e397e-6c2d-4c37-bc35-4eaff1aa1344", 00:07:43.697 "strip_size_kb": 64, 00:07:43.697 "state": "configuring", 00:07:43.697 "raid_level": "concat", 00:07:43.697 "superblock": true, 00:07:43.697 "num_base_bdevs": 2, 00:07:43.697 "num_base_bdevs_discovered": 0, 00:07:43.697 "num_base_bdevs_operational": 2, 00:07:43.697 "base_bdevs_list": [ 00:07:43.697 { 00:07:43.697 "name": "BaseBdev1", 00:07:43.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.697 "is_configured": false, 00:07:43.697 "data_offset": 0, 00:07:43.697 "data_size": 0 00:07:43.697 }, 00:07:43.697 { 00:07:43.697 "name": "BaseBdev2", 00:07:43.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.697 "is_configured": false, 00:07:43.697 "data_offset": 0, 00:07:43.697 "data_size": 0 00:07:43.697 } 00:07:43.697 ] 00:07:43.697 }' 00:07:43.697 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.697 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.955 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:43.955 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:43.955 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.955 [2024-11-15 10:52:50.830521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:43.955 [2024-11-15 10:52:50.830561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:43.955 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.955 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:43.955 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.955 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.955 [2024-11-15 10:52:50.842508] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:43.955 [2024-11-15 10:52:50.842557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:43.955 [2024-11-15 10:52:50.842568] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:43.955 [2024-11-15 10:52:50.842582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:43.955 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.955 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:43.955 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.955 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.213 [2024-11-15 10:52:50.893489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:07:44.213 BaseBdev1 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.213 [ 00:07:44.213 { 00:07:44.213 "name": "BaseBdev1", 00:07:44.213 "aliases": [ 00:07:44.213 "cf4514a7-9dfc-4567-86d7-fee131d44095" 00:07:44.213 ], 00:07:44.213 "product_name": "Malloc disk", 00:07:44.213 "block_size": 512, 00:07:44.213 "num_blocks": 65536, 00:07:44.213 "uuid": "cf4514a7-9dfc-4567-86d7-fee131d44095", 00:07:44.213 
"assigned_rate_limits": { 00:07:44.213 "rw_ios_per_sec": 0, 00:07:44.213 "rw_mbytes_per_sec": 0, 00:07:44.213 "r_mbytes_per_sec": 0, 00:07:44.213 "w_mbytes_per_sec": 0 00:07:44.213 }, 00:07:44.213 "claimed": true, 00:07:44.213 "claim_type": "exclusive_write", 00:07:44.213 "zoned": false, 00:07:44.213 "supported_io_types": { 00:07:44.213 "read": true, 00:07:44.213 "write": true, 00:07:44.213 "unmap": true, 00:07:44.213 "flush": true, 00:07:44.213 "reset": true, 00:07:44.213 "nvme_admin": false, 00:07:44.213 "nvme_io": false, 00:07:44.213 "nvme_io_md": false, 00:07:44.213 "write_zeroes": true, 00:07:44.213 "zcopy": true, 00:07:44.213 "get_zone_info": false, 00:07:44.213 "zone_management": false, 00:07:44.213 "zone_append": false, 00:07:44.213 "compare": false, 00:07:44.213 "compare_and_write": false, 00:07:44.213 "abort": true, 00:07:44.213 "seek_hole": false, 00:07:44.213 "seek_data": false, 00:07:44.213 "copy": true, 00:07:44.213 "nvme_iov_md": false 00:07:44.213 }, 00:07:44.213 "memory_domains": [ 00:07:44.213 { 00:07:44.213 "dma_device_id": "system", 00:07:44.213 "dma_device_type": 1 00:07:44.213 }, 00:07:44.213 { 00:07:44.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.213 "dma_device_type": 2 00:07:44.213 } 00:07:44.213 ], 00:07:44.213 "driver_specific": {} 00:07:44.213 } 00:07:44.213 ] 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.213 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.213 "name": "Existed_Raid", 00:07:44.214 "uuid": "6cf48329-17e0-4a4b-90da-c792c4d15cd4", 00:07:44.214 "strip_size_kb": 64, 00:07:44.214 "state": "configuring", 00:07:44.214 "raid_level": "concat", 00:07:44.214 "superblock": true, 00:07:44.214 "num_base_bdevs": 2, 00:07:44.214 "num_base_bdevs_discovered": 1, 00:07:44.214 "num_base_bdevs_operational": 2, 00:07:44.214 "base_bdevs_list": [ 00:07:44.214 { 00:07:44.214 "name": "BaseBdev1", 00:07:44.214 "uuid": "cf4514a7-9dfc-4567-86d7-fee131d44095", 00:07:44.214 "is_configured": true, 00:07:44.214 "data_offset": 
2048, 00:07:44.214 "data_size": 63488 00:07:44.214 }, 00:07:44.214 { 00:07:44.214 "name": "BaseBdev2", 00:07:44.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.214 "is_configured": false, 00:07:44.214 "data_offset": 0, 00:07:44.214 "data_size": 0 00:07:44.214 } 00:07:44.214 ] 00:07:44.214 }' 00:07:44.214 10:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.214 10:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.471 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.471 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.471 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.729 [2024-11-15 10:52:51.396706] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:44.729 [2024-11-15 10:52:51.396768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.729 [2024-11-15 10:52:51.408751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.729 [2024-11-15 10:52:51.410827] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.729 [2024-11-15 10:52:51.410933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.729 "name": "Existed_Raid", 00:07:44.729 "uuid": "b2b46065-90f4-4c83-ac23-ea7cb0529809", 00:07:44.729 "strip_size_kb": 64, 00:07:44.729 "state": "configuring", 00:07:44.729 "raid_level": "concat", 00:07:44.729 "superblock": true, 00:07:44.729 "num_base_bdevs": 2, 00:07:44.729 "num_base_bdevs_discovered": 1, 00:07:44.729 "num_base_bdevs_operational": 2, 00:07:44.729 "base_bdevs_list": [ 00:07:44.729 { 00:07:44.729 "name": "BaseBdev1", 00:07:44.729 "uuid": "cf4514a7-9dfc-4567-86d7-fee131d44095", 00:07:44.729 "is_configured": true, 00:07:44.729 "data_offset": 2048, 00:07:44.729 "data_size": 63488 00:07:44.729 }, 00:07:44.729 { 00:07:44.729 "name": "BaseBdev2", 00:07:44.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.729 "is_configured": false, 00:07:44.729 "data_offset": 0, 00:07:44.729 "data_size": 0 00:07:44.729 } 00:07:44.729 ] 00:07:44.729 }' 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.729 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.990 [2024-11-15 10:52:51.894013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:44.990 [2024-11-15 10:52:51.894413] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:44.990 [2024-11-15 10:52:51.894475] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:44.990 [2024-11-15 10:52:51.894800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:44.990 BaseBdev2 00:07:44.990 [2024-11-15 10:52:51.895003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:44.990 [2024-11-15 10:52:51.895068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:44.990 [2024-11-15 10:52:51.895264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.990 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.252 [ 00:07:45.252 { 00:07:45.252 "name": "BaseBdev2", 00:07:45.252 "aliases": [ 00:07:45.252 "cb6c3d6d-d71e-4367-917d-9e15c8231862" 00:07:45.252 ], 00:07:45.252 "product_name": "Malloc disk", 00:07:45.252 "block_size": 512, 00:07:45.252 "num_blocks": 65536, 00:07:45.252 "uuid": "cb6c3d6d-d71e-4367-917d-9e15c8231862", 00:07:45.252 "assigned_rate_limits": { 00:07:45.252 "rw_ios_per_sec": 0, 00:07:45.252 "rw_mbytes_per_sec": 0, 00:07:45.252 "r_mbytes_per_sec": 0, 00:07:45.252 "w_mbytes_per_sec": 0 00:07:45.252 }, 00:07:45.252 "claimed": true, 00:07:45.252 "claim_type": "exclusive_write", 00:07:45.252 "zoned": false, 00:07:45.252 "supported_io_types": { 00:07:45.252 "read": true, 00:07:45.252 "write": true, 00:07:45.252 "unmap": true, 00:07:45.252 "flush": true, 00:07:45.252 "reset": true, 00:07:45.252 "nvme_admin": false, 00:07:45.252 "nvme_io": false, 00:07:45.252 "nvme_io_md": false, 00:07:45.253 "write_zeroes": true, 00:07:45.253 "zcopy": true, 00:07:45.253 "get_zone_info": false, 00:07:45.253 "zone_management": false, 00:07:45.253 "zone_append": false, 00:07:45.253 "compare": false, 00:07:45.253 "compare_and_write": false, 00:07:45.253 "abort": true, 00:07:45.253 "seek_hole": false, 00:07:45.253 "seek_data": false, 00:07:45.253 "copy": true, 00:07:45.253 "nvme_iov_md": false 00:07:45.253 }, 00:07:45.253 "memory_domains": [ 00:07:45.253 { 00:07:45.253 "dma_device_id": "system", 00:07:45.253 "dma_device_type": 1 00:07:45.253 }, 00:07:45.253 { 00:07:45.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.253 "dma_device_type": 2 00:07:45.253 } 00:07:45.253 ], 00:07:45.253 "driver_specific": {} 00:07:45.253 } 00:07:45.253 ] 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.253 "name": "Existed_Raid", 00:07:45.253 "uuid": "b2b46065-90f4-4c83-ac23-ea7cb0529809", 00:07:45.253 "strip_size_kb": 64, 00:07:45.253 "state": "online", 00:07:45.253 "raid_level": "concat", 00:07:45.253 "superblock": true, 00:07:45.253 "num_base_bdevs": 2, 00:07:45.253 "num_base_bdevs_discovered": 2, 00:07:45.253 "num_base_bdevs_operational": 2, 00:07:45.253 "base_bdevs_list": [ 00:07:45.253 { 00:07:45.253 "name": "BaseBdev1", 00:07:45.253 "uuid": "cf4514a7-9dfc-4567-86d7-fee131d44095", 00:07:45.253 "is_configured": true, 00:07:45.253 "data_offset": 2048, 00:07:45.253 "data_size": 63488 00:07:45.253 }, 00:07:45.253 { 00:07:45.253 "name": "BaseBdev2", 00:07:45.253 "uuid": "cb6c3d6d-d71e-4367-917d-9e15c8231862", 00:07:45.253 "is_configured": true, 00:07:45.253 "data_offset": 2048, 00:07:45.253 "data_size": 63488 00:07:45.253 } 00:07:45.253 ] 00:07:45.253 }' 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.253 10:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.511 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:45.511 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:45.511 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:45.511 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:45.511 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:45.511 10:52:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:45.511 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:45.511 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:45.511 10:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.511 10:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.511 [2024-11-15 10:52:52.405491] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.511 10:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:45.770 "name": "Existed_Raid", 00:07:45.770 "aliases": [ 00:07:45.770 "b2b46065-90f4-4c83-ac23-ea7cb0529809" 00:07:45.770 ], 00:07:45.770 "product_name": "Raid Volume", 00:07:45.770 "block_size": 512, 00:07:45.770 "num_blocks": 126976, 00:07:45.770 "uuid": "b2b46065-90f4-4c83-ac23-ea7cb0529809", 00:07:45.770 "assigned_rate_limits": { 00:07:45.770 "rw_ios_per_sec": 0, 00:07:45.770 "rw_mbytes_per_sec": 0, 00:07:45.770 "r_mbytes_per_sec": 0, 00:07:45.770 "w_mbytes_per_sec": 0 00:07:45.770 }, 00:07:45.770 "claimed": false, 00:07:45.770 "zoned": false, 00:07:45.770 "supported_io_types": { 00:07:45.770 "read": true, 00:07:45.770 "write": true, 00:07:45.770 "unmap": true, 00:07:45.770 "flush": true, 00:07:45.770 "reset": true, 00:07:45.770 "nvme_admin": false, 00:07:45.770 "nvme_io": false, 00:07:45.770 "nvme_io_md": false, 00:07:45.770 "write_zeroes": true, 00:07:45.770 "zcopy": false, 00:07:45.770 "get_zone_info": false, 00:07:45.770 "zone_management": false, 00:07:45.770 "zone_append": false, 00:07:45.770 "compare": false, 00:07:45.770 "compare_and_write": false, 00:07:45.770 "abort": false, 00:07:45.770 "seek_hole": false, 
00:07:45.770 "seek_data": false, 00:07:45.770 "copy": false, 00:07:45.770 "nvme_iov_md": false 00:07:45.770 }, 00:07:45.770 "memory_domains": [ 00:07:45.770 { 00:07:45.770 "dma_device_id": "system", 00:07:45.770 "dma_device_type": 1 00:07:45.770 }, 00:07:45.770 { 00:07:45.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.770 "dma_device_type": 2 00:07:45.770 }, 00:07:45.770 { 00:07:45.770 "dma_device_id": "system", 00:07:45.770 "dma_device_type": 1 00:07:45.770 }, 00:07:45.770 { 00:07:45.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.770 "dma_device_type": 2 00:07:45.770 } 00:07:45.770 ], 00:07:45.770 "driver_specific": { 00:07:45.770 "raid": { 00:07:45.770 "uuid": "b2b46065-90f4-4c83-ac23-ea7cb0529809", 00:07:45.770 "strip_size_kb": 64, 00:07:45.770 "state": "online", 00:07:45.770 "raid_level": "concat", 00:07:45.770 "superblock": true, 00:07:45.770 "num_base_bdevs": 2, 00:07:45.770 "num_base_bdevs_discovered": 2, 00:07:45.770 "num_base_bdevs_operational": 2, 00:07:45.770 "base_bdevs_list": [ 00:07:45.770 { 00:07:45.770 "name": "BaseBdev1", 00:07:45.770 "uuid": "cf4514a7-9dfc-4567-86d7-fee131d44095", 00:07:45.770 "is_configured": true, 00:07:45.770 "data_offset": 2048, 00:07:45.770 "data_size": 63488 00:07:45.770 }, 00:07:45.770 { 00:07:45.770 "name": "BaseBdev2", 00:07:45.770 "uuid": "cb6c3d6d-d71e-4367-917d-9e15c8231862", 00:07:45.770 "is_configured": true, 00:07:45.770 "data_offset": 2048, 00:07:45.770 "data_size": 63488 00:07:45.770 } 00:07:45.770 ] 00:07:45.770 } 00:07:45.770 } 00:07:45.770 }' 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:45.770 BaseBdev2' 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.770 10:52:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.770 10:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.770 [2024-11-15 10:52:52.652854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:45.770 [2024-11-15 10:52:52.652953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.770 [2024-11-15 10:52:52.653042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.029 10:52:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:46.029 "name": "Existed_Raid",
00:07:46.029 "uuid": "b2b46065-90f4-4c83-ac23-ea7cb0529809",
00:07:46.029 "strip_size_kb": 64,
00:07:46.029 "state": "offline",
00:07:46.029 "raid_level": "concat",
00:07:46.029 "superblock": true,
00:07:46.029 "num_base_bdevs": 2,
00:07:46.029 "num_base_bdevs_discovered": 1,
00:07:46.029 "num_base_bdevs_operational": 1,
00:07:46.029 "base_bdevs_list": [
00:07:46.029 {
00:07:46.029 "name": null,
00:07:46.029 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:46.029 "is_configured": false,
00:07:46.029 "data_offset": 0,
00:07:46.029 "data_size": 63488
00:07:46.029 },
00:07:46.029 {
00:07:46.029 "name": "BaseBdev2",
00:07:46.029 "uuid": "cb6c3d6d-d71e-4367-917d-9e15c8231862",
00:07:46.029 "is_configured": true,
00:07:46.029 "data_offset": 2048,
00:07:46.029 "data_size": 63488
00:07:46.029 }
00:07:46.029 ]
00:07:46.029 }'
00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:46.029 10:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:46.309 10:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:07:46.309 10:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:46.309 10:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:07:46.309 10:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:46.309 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:46.309 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:46.309 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:46.309 10:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:07:46.309 10:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:07:46.309 10:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:07:46.309 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:46.309 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:46.309 [2024-11-15 10:52:53.175046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:07:46.309 [2024-11-15 10:52:53.175097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*:
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62096
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 62096 ']'
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 62096
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:46.567 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62096
killing process with pid 62096
00:07:46.568 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:46.568 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:46.568 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62096'
00:07:46.568 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 62096
00:07:46.568 [2024-11-15 10:52:53.375191] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:46.568 10:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 62096
00:07:46.568 [2024-11-15 10:52:53.393007] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:47.955 10:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:07:47.955
00:07:47.955 real 0m5.143s
00:07:47.955 user 0m7.403s
00:07:47.955 sys 0m0.853s
00:07:47.955 10:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:47.955 10:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:47.955 ************************************
00:07:47.955 END TEST raid_state_function_test_sb
00:07:47.955 ************************************
00:07:47.955 10:52:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2
00:07:47.955 10:52:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:07:47.955 10:52:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:47.955 10:52:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:47.955 ************************************
00:07:47.955 START TEST raid_superblock_test
00:07:47.955 ************************************
00:07:47.955 10:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- #
raid_superblock_test concat 2
00:07:47.955 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat
00:07:47.955 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:07:47.955 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:07:47.955 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:07:47.955 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']'
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62348
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62348
00:07:47.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
10:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 62348 ']'
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:47.956 10:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.956 [2024-11-15 10:52:54.698933] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization...
00:07:47.956 [2024-11-15 10:52:54.699056] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62348 ]
00:07:48.216 [2024-11-15 10:52:54.876528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:48.216 [2024-11-15 10:52:54.994645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:48.475 [2024-11-15 10:52:55.202062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:48.475 [2024-11-15 10:52:55.202106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:07:48.734
10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:48.734 malloc1
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:48.734 [2024-11-15 10:52:55.594048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:48.734 [2024-11-15 10:52:55.594118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:48.734 [2024-11-15 10:52:55.594142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:48.734 [2024-11-15 10:52:55.594150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:48.734 [2024-11-15 10:52:55.596411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:48.734 [2024-11-15 10:52:55.596449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:07:48.734 pt1
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:48.734 malloc2
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:48.734 [2024-11-15 10:52:55.647954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:48.734 [2024-11-15 10:52:55.648060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:48.734 [2024-11-15 10:52:55.648101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:07:48.734 [2024-11-15 10:52:55.648130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:48.734 [2024-11-15 10:52:55.650199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:48.734 [2024-11-15 10:52:55.650267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
pt2
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:48.734 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:48.993 [2024-11-15 10:52:55.660052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:48.993 [2024-11-15 10:52:55.661993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:48.993 [2024-11-15 10:52:55.662241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:48.993 [2024-11-15 10:52:55.662293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:48.993 [2024-11-15 10:52:55.662637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:48.993 [2024-11-15 10:52:55.662853] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:48.993 [2024-11-15 10:52:55.662897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:07:48.993 [2024-11-15 10:52:55.663119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test --
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:48.994 "name": "raid_bdev1",
00:07:48.994 "uuid": "568c3fb6-41d3-40ad-ac54-a054bf31ad2e",
00:07:48.994 "strip_size_kb": 64,
00:07:48.994 "state": "online",
00:07:48.994 "raid_level": "concat",
00:07:48.994 "superblock": true,
00:07:48.994 "num_base_bdevs": 2,
00:07:48.994 "num_base_bdevs_discovered": 2,
00:07:48.994 "num_base_bdevs_operational": 2,
00:07:48.994 "base_bdevs_list": [
00:07:48.994 {
00:07:48.994 "name": "pt1",
00:07:48.994 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:48.994 "is_configured": true,
00:07:48.994 "data_offset": 2048,
00:07:48.994 "data_size": 63488
00:07:48.994 },
00:07:48.994 {
00:07:48.994 "name": "pt2",
00:07:48.994 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:48.994 "is_configured": true,
00:07:48.994 "data_offset": 2048,
00:07:48.994 "data_size": 63488
00:07:48.994 }
00:07:48.994 ]
00:07:48.994 }'
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:48.994 10:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.253 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:07:49.253 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:07:49.253 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:49.253 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:49.253 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:49.253 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:49.253 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:49.253 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.253 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:49.253 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.253 [2024-11-15 10:52:56.123446] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:49.253 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:49.253 10:52:56 bdev_raid.raid_superblock_test --
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:49.253 "name": "raid_bdev1",
00:07:49.253 "aliases": [
00:07:49.253 "568c3fb6-41d3-40ad-ac54-a054bf31ad2e"
00:07:49.253 ],
00:07:49.253 "product_name": "Raid Volume",
00:07:49.253 "block_size": 512,
00:07:49.253 "num_blocks": 126976,
00:07:49.253 "uuid": "568c3fb6-41d3-40ad-ac54-a054bf31ad2e",
00:07:49.253 "assigned_rate_limits": {
00:07:49.253 "rw_ios_per_sec": 0,
00:07:49.253 "rw_mbytes_per_sec": 0,
00:07:49.253 "r_mbytes_per_sec": 0,
00:07:49.253 "w_mbytes_per_sec": 0
00:07:49.253 },
00:07:49.253 "claimed": false,
00:07:49.253 "zoned": false,
00:07:49.253 "supported_io_types": {
00:07:49.253 "read": true,
00:07:49.253 "write": true,
00:07:49.253 "unmap": true,
00:07:49.253 "flush": true,
00:07:49.253 "reset": true,
00:07:49.253 "nvme_admin": false,
00:07:49.253 "nvme_io": false,
00:07:49.253 "nvme_io_md": false,
00:07:49.253 "write_zeroes": true,
00:07:49.253 "zcopy": false,
00:07:49.253 "get_zone_info": false,
00:07:49.253 "zone_management": false,
00:07:49.253 "zone_append": false,
00:07:49.253 "compare": false,
00:07:49.253 "compare_and_write": false,
00:07:49.253 "abort": false,
00:07:49.253 "seek_hole": false,
00:07:49.253 "seek_data": false,
00:07:49.253 "copy": false,
00:07:49.253 "nvme_iov_md": false
00:07:49.253 },
00:07:49.253 "memory_domains": [
00:07:49.253 {
00:07:49.253 "dma_device_id": "system",
00:07:49.253 "dma_device_type": 1
00:07:49.253 },
00:07:49.253 {
00:07:49.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:49.253 "dma_device_type": 2
00:07:49.253 },
00:07:49.253 {
00:07:49.253 "dma_device_id": "system",
00:07:49.253 "dma_device_type": 1
00:07:49.253 },
00:07:49.253 {
00:07:49.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:49.253 "dma_device_type": 2
00:07:49.253 }
00:07:49.253 ],
00:07:49.253 "driver_specific": {
00:07:49.253 "raid": {
00:07:49.253 "uuid": "568c3fb6-41d3-40ad-ac54-a054bf31ad2e",
00:07:49.253 "strip_size_kb": 64,
00:07:49.253 "state": "online",
"raid_level": "concat",
00:07:49.253 "superblock": true,
00:07:49.253 "num_base_bdevs": 2,
00:07:49.253 "num_base_bdevs_discovered": 2,
00:07:49.253 "num_base_bdevs_operational": 2,
00:07:49.253 "base_bdevs_list": [
00:07:49.253 {
00:07:49.253 "name": "pt1",
00:07:49.253 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:49.253 "is_configured": true,
00:07:49.253 "data_offset": 2048,
00:07:49.253 "data_size": 63488
00:07:49.253 },
00:07:49.253 {
00:07:49.253 "name": "pt2",
00:07:49.253 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:49.253 "is_configured": true,
00:07:49.253 "data_offset": 2048,
00:07:49.253 "data_size": 63488
00:07:49.253 }
00:07:49.253 ]
00:07:49.253 }
00:07:49.253 }
00:07:49.253 }'
00:07:49.253 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:07:49.512 pt2'
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:49.512 10:52:56
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.512 [2024-11-15 10:52:56.363040] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=568c3fb6-41d3-40ad-ac54-a054bf31ad2e
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 568c3fb6-41d3-40ad-ac54-a054bf31ad2e ']'
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.512 [2024-11-15 10:52:56.406632] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:49.512 [2024-11-15 10:52:56.406662] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:49.512 [2024-11-15 10:52:56.406743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:49.512 [2024-11-15 10:52:56.406792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:49.512 [2024-11-15 10:52:56.406808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.512 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:49.772 10:52:56
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.772 [2024-11-15 10:52:56.518490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:07:49.772 [2024-11-15 10:52:56.520520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:07:49.772 [2024-11-15 10:52:56.520593] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:07:49.772 [2024-11-15 10:52:56.520663] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:07:49.772 [2024-11-15 10:52:56.520683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:49.772 [2024-11-15 10:52:56.520694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:07:49.772 request:
00:07:49.772 {
00:07:49.772 "name": "raid_bdev1",
00:07:49.772 "raid_level": "concat",
00:07:49.772 "base_bdevs": [
00:07:49.772 "malloc1",
00:07:49.772 "malloc2"
00:07:49.772 ],
00:07:49.772 "strip_size_kb": 64,
00:07:49.772 "superblock": false,
00:07:49.772 "method": "bdev_raid_create",
00:07:49.772 "req_id": 1
00:07:49.772 }
00:07:49.772 Got JSON-RPC error response
00:07:49.772 response:
00:07:49.772 {
00:07:49.772 "code": -17,
00:07:49.772 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:07:49.772 }
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.772 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.773 [2024-11-15 10:52:56.586426] vbdev_passthru.c: 607:vbdev_passthru_register:
*NOTICE*: Match on malloc1 00:07:49.773 [2024-11-15 10:52:56.586499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.773 [2024-11-15 10:52:56.586519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:49.773 [2024-11-15 10:52:56.586530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.773 [2024-11-15 10:52:56.588733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.773 [2024-11-15 10:52:56.588775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:49.773 [2024-11-15 10:52:56.588863] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:49.773 [2024-11-15 10:52:56.588930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:49.773 pt1 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.773 "name": "raid_bdev1", 00:07:49.773 "uuid": "568c3fb6-41d3-40ad-ac54-a054bf31ad2e", 00:07:49.773 "strip_size_kb": 64, 00:07:49.773 "state": "configuring", 00:07:49.773 "raid_level": "concat", 00:07:49.773 "superblock": true, 00:07:49.773 "num_base_bdevs": 2, 00:07:49.773 "num_base_bdevs_discovered": 1, 00:07:49.773 "num_base_bdevs_operational": 2, 00:07:49.773 "base_bdevs_list": [ 00:07:49.773 { 00:07:49.773 "name": "pt1", 00:07:49.773 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.773 "is_configured": true, 00:07:49.773 "data_offset": 2048, 00:07:49.773 "data_size": 63488 00:07:49.773 }, 00:07:49.773 { 00:07:49.773 "name": null, 00:07:49.773 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.773 "is_configured": false, 00:07:49.773 "data_offset": 2048, 00:07:49.773 "data_size": 63488 00:07:49.773 } 00:07:49.773 ] 00:07:49.773 }' 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.773 10:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.342 [2024-11-15 10:52:57.053637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:50.342 [2024-11-15 10:52:57.053720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.342 [2024-11-15 10:52:57.053742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:50.342 [2024-11-15 10:52:57.053754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.342 [2024-11-15 10:52:57.054253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.342 [2024-11-15 10:52:57.054290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:50.342 [2024-11-15 10:52:57.054398] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:50.342 [2024-11-15 10:52:57.054436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:50.342 [2024-11-15 10:52:57.054566] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:50.342 [2024-11-15 10:52:57.054586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:50.342 [2024-11-15 10:52:57.054843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:50.342 [2024-11-15 10:52:57.055017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:07:50.342 [2024-11-15 10:52:57.055035] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:50.342 [2024-11-15 10:52:57.055198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.342 pt2 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.342 "name": "raid_bdev1", 00:07:50.342 "uuid": "568c3fb6-41d3-40ad-ac54-a054bf31ad2e", 00:07:50.342 "strip_size_kb": 64, 00:07:50.342 "state": "online", 00:07:50.342 "raid_level": "concat", 00:07:50.342 "superblock": true, 00:07:50.342 "num_base_bdevs": 2, 00:07:50.342 "num_base_bdevs_discovered": 2, 00:07:50.342 "num_base_bdevs_operational": 2, 00:07:50.342 "base_bdevs_list": [ 00:07:50.342 { 00:07:50.342 "name": "pt1", 00:07:50.342 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.342 "is_configured": true, 00:07:50.342 "data_offset": 2048, 00:07:50.342 "data_size": 63488 00:07:50.342 }, 00:07:50.342 { 00:07:50.342 "name": "pt2", 00:07:50.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.342 "is_configured": true, 00:07:50.342 "data_offset": 2048, 00:07:50.342 "data_size": 63488 00:07:50.342 } 00:07:50.342 ] 00:07:50.342 }' 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.342 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:50.911 10:52:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.911 [2024-11-15 10:52:57.549045] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:50.911 "name": "raid_bdev1", 00:07:50.911 "aliases": [ 00:07:50.911 "568c3fb6-41d3-40ad-ac54-a054bf31ad2e" 00:07:50.911 ], 00:07:50.911 "product_name": "Raid Volume", 00:07:50.911 "block_size": 512, 00:07:50.911 "num_blocks": 126976, 00:07:50.911 "uuid": "568c3fb6-41d3-40ad-ac54-a054bf31ad2e", 00:07:50.911 "assigned_rate_limits": { 00:07:50.911 "rw_ios_per_sec": 0, 00:07:50.911 "rw_mbytes_per_sec": 0, 00:07:50.911 "r_mbytes_per_sec": 0, 00:07:50.911 "w_mbytes_per_sec": 0 00:07:50.911 }, 00:07:50.911 "claimed": false, 00:07:50.911 "zoned": false, 00:07:50.911 "supported_io_types": { 00:07:50.911 "read": true, 00:07:50.911 "write": true, 00:07:50.911 "unmap": true, 00:07:50.911 "flush": true, 00:07:50.911 "reset": true, 00:07:50.911 "nvme_admin": false, 00:07:50.911 "nvme_io": false, 00:07:50.911 "nvme_io_md": false, 00:07:50.911 "write_zeroes": true, 00:07:50.911 "zcopy": false, 00:07:50.911 "get_zone_info": false, 00:07:50.911 "zone_management": false, 00:07:50.911 "zone_append": false, 00:07:50.911 "compare": false, 00:07:50.911 "compare_and_write": false, 00:07:50.911 "abort": false, 00:07:50.911 "seek_hole": false, 00:07:50.911 
"seek_data": false, 00:07:50.911 "copy": false, 00:07:50.911 "nvme_iov_md": false 00:07:50.911 }, 00:07:50.911 "memory_domains": [ 00:07:50.911 { 00:07:50.911 "dma_device_id": "system", 00:07:50.911 "dma_device_type": 1 00:07:50.911 }, 00:07:50.911 { 00:07:50.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.911 "dma_device_type": 2 00:07:50.911 }, 00:07:50.911 { 00:07:50.911 "dma_device_id": "system", 00:07:50.911 "dma_device_type": 1 00:07:50.911 }, 00:07:50.911 { 00:07:50.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.911 "dma_device_type": 2 00:07:50.911 } 00:07:50.911 ], 00:07:50.911 "driver_specific": { 00:07:50.911 "raid": { 00:07:50.911 "uuid": "568c3fb6-41d3-40ad-ac54-a054bf31ad2e", 00:07:50.911 "strip_size_kb": 64, 00:07:50.911 "state": "online", 00:07:50.911 "raid_level": "concat", 00:07:50.911 "superblock": true, 00:07:50.911 "num_base_bdevs": 2, 00:07:50.911 "num_base_bdevs_discovered": 2, 00:07:50.911 "num_base_bdevs_operational": 2, 00:07:50.911 "base_bdevs_list": [ 00:07:50.911 { 00:07:50.911 "name": "pt1", 00:07:50.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.911 "is_configured": true, 00:07:50.911 "data_offset": 2048, 00:07:50.911 "data_size": 63488 00:07:50.911 }, 00:07:50.911 { 00:07:50.911 "name": "pt2", 00:07:50.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.911 "is_configured": true, 00:07:50.911 "data_offset": 2048, 00:07:50.911 "data_size": 63488 00:07:50.911 } 00:07:50.911 ] 00:07:50.911 } 00:07:50.911 } 00:07:50.911 }' 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:50.911 pt2' 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.911 10:52:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.911 [2024-11-15 10:52:57.780644] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 568c3fb6-41d3-40ad-ac54-a054bf31ad2e '!=' 568c3fb6-41d3-40ad-ac54-a054bf31ad2e ']' 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62348 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 62348 ']' 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 62348 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:50.911 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62348 00:07:51.171 killing process with pid 62348 00:07:51.171 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:51.171 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:51.171 10:52:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 62348' 00:07:51.171 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 62348 00:07:51.171 [2024-11-15 10:52:57.855548] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.171 [2024-11-15 10:52:57.855651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.171 [2024-11-15 10:52:57.855702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.171 [2024-11-15 10:52:57.855714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:51.171 10:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 62348 00:07:52.550 [2024-11-15 10:52:58.076627] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.550 10:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:52.550 00:07:52.550 real 0m4.645s 00:07:52.550 user 0m6.517s 00:07:52.550 sys 0m0.791s 00:07:52.550 10:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:52.550 10:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.550 ************************************ 00:07:52.550 END TEST raid_superblock_test 00:07:52.550 ************************************ 00:07:52.550 10:52:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:52.550 10:52:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:52.550 10:52:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:52.550 10:52:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.550 ************************************ 00:07:52.550 START TEST raid_read_error_test 00:07:52.550 ************************************ 00:07:52.550 10:52:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:52.550 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:52.551 10:52:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:52.551 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:52.551 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:52.551 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:52.551 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dYBk94S2nM 00:07:52.551 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62554 00:07:52.551 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62554 00:07:52.551 10:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:52.551 10:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62554 ']' 00:07:52.551 10:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.551 10:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:52.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.551 10:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.551 10:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:52.551 10:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.551 [2024-11-15 10:52:59.443126] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:07:52.551 [2024-11-15 10:52:59.443273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62554 ] 00:07:52.808 [2024-11-15 10:52:59.626547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.066 [2024-11-15 10:52:59.745400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.066 [2024-11-15 10:52:59.948813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.066 [2024-11-15 10:52:59.948861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.633 BaseBdev1_malloc 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.633 true 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.633 [2024-11-15 10:53:00.358429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:53.633 [2024-11-15 10:53:00.358480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.633 [2024-11-15 10:53:00.358500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:53.633 [2024-11-15 10:53:00.358510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.633 [2024-11-15 10:53:00.360611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.633 [2024-11-15 10:53:00.360651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:53.633 BaseBdev1 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.633 BaseBdev2_malloc 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.633 true 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.633 [2024-11-15 10:53:00.427098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:53.633 [2024-11-15 10:53:00.427154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.633 [2024-11-15 10:53:00.427171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:53.633 [2024-11-15 10:53:00.427182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.633 [2024-11-15 10:53:00.429381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.633 [2024-11-15 10:53:00.429416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:53.633 BaseBdev2 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.633 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.633 [2024-11-15 10:53:00.439163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:07:53.634 [2024-11-15 10:53:00.441204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.634 [2024-11-15 10:53:00.441432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:53.634 [2024-11-15 10:53:00.441456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:53.634 [2024-11-15 10:53:00.441716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:53.634 [2024-11-15 10:53:00.441927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:53.634 [2024-11-15 10:53:00.441949] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:53.634 [2024-11-15 10:53:00.442137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.634 "name": "raid_bdev1", 00:07:53.634 "uuid": "2c412f4e-a097-4376-b4cf-03f65e469e23", 00:07:53.634 "strip_size_kb": 64, 00:07:53.634 "state": "online", 00:07:53.634 "raid_level": "concat", 00:07:53.634 "superblock": true, 00:07:53.634 "num_base_bdevs": 2, 00:07:53.634 "num_base_bdevs_discovered": 2, 00:07:53.634 "num_base_bdevs_operational": 2, 00:07:53.634 "base_bdevs_list": [ 00:07:53.634 { 00:07:53.634 "name": "BaseBdev1", 00:07:53.634 "uuid": "3906e00f-858d-5b04-8352-b974142f2ab3", 00:07:53.634 "is_configured": true, 00:07:53.634 "data_offset": 2048, 00:07:53.634 "data_size": 63488 00:07:53.634 }, 00:07:53.634 { 00:07:53.634 "name": "BaseBdev2", 00:07:53.634 "uuid": "0cb287ee-bbc8-50ed-ac7c-be46d3f4755a", 00:07:53.634 "is_configured": true, 00:07:53.634 "data_offset": 2048, 00:07:53.634 "data_size": 63488 00:07:53.634 } 00:07:53.634 ] 00:07:53.634 }' 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.634 10:53:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.202 10:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:54.202 10:53:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:54.202 [2024-11-15 10:53:00.979594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.140 "name": "raid_bdev1", 00:07:55.140 "uuid": "2c412f4e-a097-4376-b4cf-03f65e469e23", 00:07:55.140 "strip_size_kb": 64, 00:07:55.140 "state": "online", 00:07:55.140 "raid_level": "concat", 00:07:55.140 "superblock": true, 00:07:55.140 "num_base_bdevs": 2, 00:07:55.140 "num_base_bdevs_discovered": 2, 00:07:55.140 "num_base_bdevs_operational": 2, 00:07:55.140 "base_bdevs_list": [ 00:07:55.140 { 00:07:55.140 "name": "BaseBdev1", 00:07:55.140 "uuid": "3906e00f-858d-5b04-8352-b974142f2ab3", 00:07:55.140 "is_configured": true, 00:07:55.140 "data_offset": 2048, 00:07:55.140 "data_size": 63488 00:07:55.140 }, 00:07:55.140 { 00:07:55.140 "name": "BaseBdev2", 00:07:55.140 "uuid": "0cb287ee-bbc8-50ed-ac7c-be46d3f4755a", 00:07:55.140 "is_configured": true, 00:07:55.140 "data_offset": 2048, 00:07:55.140 "data_size": 63488 00:07:55.140 } 00:07:55.140 ] 00:07:55.140 }' 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.140 10:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.709 10:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:55.709 10:53:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.709 10:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.709 [2024-11-15 10:53:02.343532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.709 [2024-11-15 10:53:02.343578] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.709 [2024-11-15 10:53:02.346499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.709 [2024-11-15 10:53:02.346545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.709 [2024-11-15 10:53:02.346578] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.709 [2024-11-15 10:53:02.346593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:55.709 { 00:07:55.709 "results": [ 00:07:55.709 { 00:07:55.709 "job": "raid_bdev1", 00:07:55.709 "core_mask": "0x1", 00:07:55.709 "workload": "randrw", 00:07:55.709 "percentage": 50, 00:07:55.709 "status": "finished", 00:07:55.709 "queue_depth": 1, 00:07:55.709 "io_size": 131072, 00:07:55.709 "runtime": 1.364798, 00:07:55.709 "iops": 15540.761343436905, 00:07:55.709 "mibps": 1942.5951679296131, 00:07:55.709 "io_failed": 1, 00:07:55.709 "io_timeout": 0, 00:07:55.709 "avg_latency_us": 89.2375231686451, 00:07:55.709 "min_latency_us": 26.270742358078603, 00:07:55.709 "max_latency_us": 1674.172925764192 00:07:55.709 } 00:07:55.709 ], 00:07:55.709 "core_count": 1 00:07:55.709 } 00:07:55.709 10:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.709 10:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62554 00:07:55.709 10:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 62554 ']' 00:07:55.709 10:53:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62554 00:07:55.709 10:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:07:55.709 10:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:55.709 10:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62554 00:07:55.709 10:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:55.709 10:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:55.709 killing process with pid 62554 00:07:55.709 10:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62554' 00:07:55.709 10:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62554 00:07:55.709 [2024-11-15 10:53:02.383355] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:55.709 10:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62554 00:07:55.709 [2024-11-15 10:53:02.528927] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.114 10:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dYBk94S2nM 00:07:57.114 10:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:57.114 10:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:57.115 10:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:57.115 10:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:57.115 10:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.115 10:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:57.115 10:53:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:57.115 00:07:57.115 real 0m4.417s 00:07:57.115 user 0m5.301s 00:07:57.115 sys 0m0.545s 00:07:57.115 10:53:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:57.115 10:53:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.115 ************************************ 00:07:57.115 END TEST raid_read_error_test 00:07:57.115 ************************************ 00:07:57.115 10:53:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:57.115 10:53:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:57.115 10:53:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.115 10:53:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.115 ************************************ 00:07:57.115 START TEST raid_write_error_test 00:07:57.115 ************************************ 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.115 10:53:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jSaP6YzNhf 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62705 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62705 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:57.115 10:53:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62705 ']' 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:57.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:57.115 10:53:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.115 [2024-11-15 10:53:03.910958] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:07:57.115 [2024-11-15 10:53:03.911097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62705 ] 00:07:57.375 [2024-11-15 10:53:04.087327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.375 [2024-11-15 10:53:04.203109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.635 [2024-11-15 10:53:04.418939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.635 [2024-11-15 10:53:04.419017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.895 BaseBdev1_malloc 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.895 true 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.895 [2024-11-15 10:53:04.808616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:57.895 [2024-11-15 10:53:04.808678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.895 [2024-11-15 10:53:04.808701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:57.895 [2024-11-15 10:53:04.808712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.895 [2024-11-15 10:53:04.810853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.895 [2024-11-15 10:53:04.810893] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:57.895 BaseBdev1 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.895 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.155 BaseBdev2_malloc 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.155 true 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.155 [2024-11-15 10:53:04.877997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:58.155 [2024-11-15 10:53:04.878055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.155 [2024-11-15 10:53:04.878074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:58.155 
[2024-11-15 10:53:04.878086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.155 [2024-11-15 10:53:04.880236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.155 [2024-11-15 10:53:04.880282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:58.155 BaseBdev2 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.155 [2024-11-15 10:53:04.890034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.155 [2024-11-15 10:53:04.891943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.155 [2024-11-15 10:53:04.892156] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.155 [2024-11-15 10:53:04.892181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:58.155 [2024-11-15 10:53:04.892453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:58.155 [2024-11-15 10:53:04.892664] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:58.155 [2024-11-15 10:53:04.892685] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:58.155 [2024-11-15 10:53:04.892862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.155 
10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.155 "name": "raid_bdev1", 00:07:58.155 "uuid": "a07846f2-fcef-4c77-bc6c-3e30e10f590a", 00:07:58.155 "strip_size_kb": 64, 00:07:58.155 "state": "online", 00:07:58.155 "raid_level": "concat", 00:07:58.155 "superblock": true, 
00:07:58.155 "num_base_bdevs": 2, 00:07:58.155 "num_base_bdevs_discovered": 2, 00:07:58.155 "num_base_bdevs_operational": 2, 00:07:58.155 "base_bdevs_list": [ 00:07:58.155 { 00:07:58.155 "name": "BaseBdev1", 00:07:58.155 "uuid": "d58c98f0-1a71-55c0-b2ab-55ae0b322cb3", 00:07:58.155 "is_configured": true, 00:07:58.155 "data_offset": 2048, 00:07:58.155 "data_size": 63488 00:07:58.155 }, 00:07:58.155 { 00:07:58.155 "name": "BaseBdev2", 00:07:58.155 "uuid": "fa505311-e56f-5e00-9503-3257482294c6", 00:07:58.155 "is_configured": true, 00:07:58.155 "data_offset": 2048, 00:07:58.155 "data_size": 63488 00:07:58.155 } 00:07:58.155 ] 00:07:58.155 }' 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.155 10:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.414 10:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:58.414 10:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:58.674 [2024-11-15 10:53:05.426663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.613 "name": "raid_bdev1", 00:07:59.613 "uuid": "a07846f2-fcef-4c77-bc6c-3e30e10f590a", 00:07:59.613 "strip_size_kb": 64, 00:07:59.613 "state": "online", 00:07:59.613 "raid_level": "concat", 
00:07:59.613 "superblock": true, 00:07:59.613 "num_base_bdevs": 2, 00:07:59.613 "num_base_bdevs_discovered": 2, 00:07:59.613 "num_base_bdevs_operational": 2, 00:07:59.613 "base_bdevs_list": [ 00:07:59.613 { 00:07:59.613 "name": "BaseBdev1", 00:07:59.613 "uuid": "d58c98f0-1a71-55c0-b2ab-55ae0b322cb3", 00:07:59.613 "is_configured": true, 00:07:59.613 "data_offset": 2048, 00:07:59.613 "data_size": 63488 00:07:59.613 }, 00:07:59.613 { 00:07:59.613 "name": "BaseBdev2", 00:07:59.613 "uuid": "fa505311-e56f-5e00-9503-3257482294c6", 00:07:59.613 "is_configured": true, 00:07:59.613 "data_offset": 2048, 00:07:59.613 "data_size": 63488 00:07:59.613 } 00:07:59.613 ] 00:07:59.613 }' 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.613 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.872 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:59.872 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.872 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.872 [2024-11-15 10:53:06.778660] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.872 [2024-11-15 10:53:06.778705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.872 [2024-11-15 10:53:06.782045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.872 [2024-11-15 10:53:06.782106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.872 [2024-11-15 10:53:06.782143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.872 [2024-11-15 10:53:06.782160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:59.872 { 
00:07:59.872 "results": [ 00:07:59.872 { 00:07:59.872 "job": "raid_bdev1", 00:07:59.872 "core_mask": "0x1", 00:07:59.872 "workload": "randrw", 00:07:59.872 "percentage": 50, 00:07:59.872 "status": "finished", 00:07:59.872 "queue_depth": 1, 00:07:59.872 "io_size": 131072, 00:07:59.872 "runtime": 1.352698, 00:07:59.872 "iops": 15377.416097310708, 00:07:59.872 "mibps": 1922.1770121638385, 00:07:59.872 "io_failed": 1, 00:07:59.872 "io_timeout": 0, 00:07:59.872 "avg_latency_us": 90.15262136786478, 00:07:59.872 "min_latency_us": 26.270742358078603, 00:07:59.872 "max_latency_us": 1509.6174672489083 00:07:59.872 } 00:07:59.872 ], 00:07:59.872 "core_count": 1 00:07:59.872 } 00:07:59.872 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.872 10:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62705 00:07:59.872 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 62705 ']' 00:07:59.872 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62705 00:07:59.872 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:07:59.872 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:00.131 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62705 00:08:00.131 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:00.131 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:00.131 killing process with pid 62705 00:08:00.131 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62705' 00:08:00.131 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62705 00:08:00.131 [2024-11-15 10:53:06.833115] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.131 10:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 62705 00:08:00.131 [2024-11-15 10:53:06.969403] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.510 10:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:01.510 10:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:01.510 10:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jSaP6YzNhf 00:08:01.510 10:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:01.510 10:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:01.510 10:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:01.510 10:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:01.510 10:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:01.510 00:08:01.510 real 0m4.367s 00:08:01.510 user 0m5.208s 00:08:01.510 sys 0m0.563s 00:08:01.510 10:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:01.510 10:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.510 ************************************ 00:08:01.510 END TEST raid_write_error_test 00:08:01.510 ************************************ 00:08:01.510 10:53:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:01.510 10:53:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:01.510 10:53:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:01.510 10:53:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:01.510 10:53:08 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:08:01.510 ************************************ 00:08:01.510 START TEST raid_state_function_test 00:08:01.510 ************************************ 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62843 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62843' 00:08:01.510 Process raid pid: 62843 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62843 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62843 ']' 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:01.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:01.510 10:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.510 [2024-11-15 10:53:08.341486] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:08:01.510 [2024-11-15 10:53:08.341617] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.772 [2024-11-15 10:53:08.501358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.772 [2024-11-15 10:53:08.617865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.031 [2024-11-15 10:53:08.833820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.031 [2024-11-15 10:53:08.833873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.291 [2024-11-15 10:53:09.202118] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:02.291 [2024-11-15 10:53:09.202178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:02.291 [2024-11-15 10:53:09.202191] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:08:02.291 [2024-11-15 10:53:09.202203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.291 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.550 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:02.550 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.550 "name": "Existed_Raid", 00:08:02.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.550 "strip_size_kb": 0, 00:08:02.550 "state": "configuring", 00:08:02.550 "raid_level": "raid1", 00:08:02.550 "superblock": false, 00:08:02.550 "num_base_bdevs": 2, 00:08:02.550 "num_base_bdevs_discovered": 0, 00:08:02.550 "num_base_bdevs_operational": 2, 00:08:02.550 "base_bdevs_list": [ 00:08:02.550 { 00:08:02.550 "name": "BaseBdev1", 00:08:02.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.550 "is_configured": false, 00:08:02.550 "data_offset": 0, 00:08:02.550 "data_size": 0 00:08:02.550 }, 00:08:02.550 { 00:08:02.550 "name": "BaseBdev2", 00:08:02.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.550 "is_configured": false, 00:08:02.550 "data_offset": 0, 00:08:02.550 "data_size": 0 00:08:02.550 } 00:08:02.550 ] 00:08:02.550 }' 00:08:02.550 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.550 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.808 [2024-11-15 10:53:09.665299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.808 [2024-11-15 10:53:09.665358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.808 [2024-11-15 10:53:09.677271] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:02.808 [2024-11-15 10:53:09.677330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:02.808 [2024-11-15 10:53:09.677341] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.808 [2024-11-15 10:53:09.677353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.808 [2024-11-15 10:53:09.728429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.808 BaseBdev1 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:02.808 
10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.808 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.101 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.101 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:03.101 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.101 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.101 [ 00:08:03.101 { 00:08:03.101 "name": "BaseBdev1", 00:08:03.101 "aliases": [ 00:08:03.101 "5eb7dfd7-69a4-4ee7-8611-9c53d78e4814" 00:08:03.101 ], 00:08:03.101 "product_name": "Malloc disk", 00:08:03.101 "block_size": 512, 00:08:03.101 "num_blocks": 65536, 00:08:03.101 "uuid": "5eb7dfd7-69a4-4ee7-8611-9c53d78e4814", 00:08:03.101 "assigned_rate_limits": { 00:08:03.101 "rw_ios_per_sec": 0, 00:08:03.101 "rw_mbytes_per_sec": 0, 00:08:03.101 "r_mbytes_per_sec": 0, 00:08:03.101 "w_mbytes_per_sec": 0 00:08:03.101 }, 00:08:03.101 "claimed": true, 00:08:03.101 "claim_type": "exclusive_write", 00:08:03.101 "zoned": false, 00:08:03.101 "supported_io_types": { 00:08:03.101 "read": true, 00:08:03.101 "write": true, 00:08:03.101 "unmap": true, 00:08:03.101 "flush": true, 00:08:03.101 "reset": true, 00:08:03.101 "nvme_admin": false, 00:08:03.101 "nvme_io": false, 00:08:03.101 "nvme_io_md": false, 00:08:03.101 "write_zeroes": true, 00:08:03.101 "zcopy": true, 00:08:03.101 "get_zone_info": 
false, 00:08:03.101 "zone_management": false, 00:08:03.101 "zone_append": false, 00:08:03.101 "compare": false, 00:08:03.101 "compare_and_write": false, 00:08:03.101 "abort": true, 00:08:03.101 "seek_hole": false, 00:08:03.101 "seek_data": false, 00:08:03.101 "copy": true, 00:08:03.102 "nvme_iov_md": false 00:08:03.102 }, 00:08:03.102 "memory_domains": [ 00:08:03.102 { 00:08:03.102 "dma_device_id": "system", 00:08:03.102 "dma_device_type": 1 00:08:03.102 }, 00:08:03.102 { 00:08:03.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.102 "dma_device_type": 2 00:08:03.102 } 00:08:03.102 ], 00:08:03.102 "driver_specific": {} 00:08:03.102 } 00:08:03.102 ] 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.102 "name": "Existed_Raid", 00:08:03.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.102 "strip_size_kb": 0, 00:08:03.102 "state": "configuring", 00:08:03.102 "raid_level": "raid1", 00:08:03.102 "superblock": false, 00:08:03.102 "num_base_bdevs": 2, 00:08:03.102 "num_base_bdevs_discovered": 1, 00:08:03.102 "num_base_bdevs_operational": 2, 00:08:03.102 "base_bdevs_list": [ 00:08:03.102 { 00:08:03.102 "name": "BaseBdev1", 00:08:03.102 "uuid": "5eb7dfd7-69a4-4ee7-8611-9c53d78e4814", 00:08:03.102 "is_configured": true, 00:08:03.102 "data_offset": 0, 00:08:03.102 "data_size": 65536 00:08:03.102 }, 00:08:03.102 { 00:08:03.102 "name": "BaseBdev2", 00:08:03.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.102 "is_configured": false, 00:08:03.102 "data_offset": 0, 00:08:03.102 "data_size": 0 00:08:03.102 } 00:08:03.102 ] 00:08:03.102 }' 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.102 10:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.361 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:03.361 10:53:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.361 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.361 [2024-11-15 10:53:10.243608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:03.361 [2024-11-15 10:53:10.243670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:03.361 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.362 [2024-11-15 10:53:10.251633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.362 [2024-11-15 10:53:10.253635] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.362 [2024-11-15 10:53:10.253677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.362 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.621 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.621 "name": "Existed_Raid", 00:08:03.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.622 "strip_size_kb": 0, 00:08:03.622 "state": "configuring", 00:08:03.622 "raid_level": "raid1", 00:08:03.622 "superblock": false, 00:08:03.622 "num_base_bdevs": 2, 00:08:03.622 "num_base_bdevs_discovered": 1, 00:08:03.622 "num_base_bdevs_operational": 2, 00:08:03.622 "base_bdevs_list": [ 00:08:03.622 { 00:08:03.622 "name": "BaseBdev1", 00:08:03.622 "uuid": "5eb7dfd7-69a4-4ee7-8611-9c53d78e4814", 00:08:03.622 
"is_configured": true, 00:08:03.622 "data_offset": 0, 00:08:03.622 "data_size": 65536 00:08:03.622 }, 00:08:03.622 { 00:08:03.622 "name": "BaseBdev2", 00:08:03.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.622 "is_configured": false, 00:08:03.622 "data_offset": 0, 00:08:03.622 "data_size": 0 00:08:03.622 } 00:08:03.622 ] 00:08:03.622 }' 00:08:03.622 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.622 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.880 [2024-11-15 10:53:10.763109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:03.880 [2024-11-15 10:53:10.763172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:03.880 [2024-11-15 10:53:10.763182] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:03.880 [2024-11-15 10:53:10.763468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:03.880 [2024-11-15 10:53:10.763657] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:03.880 [2024-11-15 10:53:10.763682] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:03.880 [2024-11-15 10:53:10.763977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.880 BaseBdev2 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.880 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.880 [ 00:08:03.880 { 00:08:03.880 "name": "BaseBdev2", 00:08:03.880 "aliases": [ 00:08:03.880 "4caffd80-2718-4c2e-8d79-9983fcfaad79" 00:08:03.880 ], 00:08:03.880 "product_name": "Malloc disk", 00:08:03.880 "block_size": 512, 00:08:03.880 "num_blocks": 65536, 00:08:03.880 "uuid": "4caffd80-2718-4c2e-8d79-9983fcfaad79", 00:08:03.880 "assigned_rate_limits": { 00:08:03.880 "rw_ios_per_sec": 0, 00:08:03.881 "rw_mbytes_per_sec": 0, 00:08:03.881 "r_mbytes_per_sec": 0, 00:08:03.881 "w_mbytes_per_sec": 0 00:08:03.881 }, 00:08:03.881 "claimed": true, 00:08:03.881 "claim_type": 
"exclusive_write", 00:08:03.881 "zoned": false, 00:08:03.881 "supported_io_types": { 00:08:03.881 "read": true, 00:08:03.881 "write": true, 00:08:03.881 "unmap": true, 00:08:03.881 "flush": true, 00:08:03.881 "reset": true, 00:08:03.881 "nvme_admin": false, 00:08:03.881 "nvme_io": false, 00:08:03.881 "nvme_io_md": false, 00:08:03.881 "write_zeroes": true, 00:08:03.881 "zcopy": true, 00:08:03.881 "get_zone_info": false, 00:08:03.881 "zone_management": false, 00:08:03.881 "zone_append": false, 00:08:03.881 "compare": false, 00:08:03.881 "compare_and_write": false, 00:08:03.881 "abort": true, 00:08:03.881 "seek_hole": false, 00:08:03.881 "seek_data": false, 00:08:03.881 "copy": true, 00:08:03.881 "nvme_iov_md": false 00:08:03.881 }, 00:08:03.881 "memory_domains": [ 00:08:03.881 { 00:08:03.881 "dma_device_id": "system", 00:08:03.881 "dma_device_type": 1 00:08:03.881 }, 00:08:03.881 { 00:08:03.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.881 "dma_device_type": 2 00:08:03.881 } 00:08:03.881 ], 00:08:03.881 "driver_specific": {} 00:08:03.881 } 00:08:03.881 ] 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.881 
10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.881 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.140 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.140 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.140 "name": "Existed_Raid", 00:08:04.140 "uuid": "294765da-3032-431f-9579-b0bf3e0fdd12", 00:08:04.140 "strip_size_kb": 0, 00:08:04.140 "state": "online", 00:08:04.140 "raid_level": "raid1", 00:08:04.140 "superblock": false, 00:08:04.140 "num_base_bdevs": 2, 00:08:04.140 "num_base_bdevs_discovered": 2, 00:08:04.140 "num_base_bdevs_operational": 2, 00:08:04.140 "base_bdevs_list": [ 00:08:04.140 { 00:08:04.140 "name": "BaseBdev1", 00:08:04.140 "uuid": "5eb7dfd7-69a4-4ee7-8611-9c53d78e4814", 00:08:04.140 "is_configured": true, 00:08:04.140 "data_offset": 0, 00:08:04.140 "data_size": 65536 00:08:04.140 }, 00:08:04.140 { 00:08:04.140 "name": "BaseBdev2", 
00:08:04.140 "uuid": "4caffd80-2718-4c2e-8d79-9983fcfaad79", 00:08:04.140 "is_configured": true, 00:08:04.140 "data_offset": 0, 00:08:04.140 "data_size": 65536 00:08:04.140 } 00:08:04.140 ] 00:08:04.140 }' 00:08:04.140 10:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.140 10:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.400 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:04.400 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:04.400 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:04.400 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:04.400 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:04.400 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:04.400 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:04.400 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:04.400 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.400 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.400 [2024-11-15 10:53:11.234680] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.400 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.400 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:04.400 "name": "Existed_Raid", 00:08:04.400 "aliases": [ 00:08:04.400 "294765da-3032-431f-9579-b0bf3e0fdd12" 00:08:04.400 ], 
00:08:04.400 "product_name": "Raid Volume", 00:08:04.400 "block_size": 512, 00:08:04.400 "num_blocks": 65536, 00:08:04.400 "uuid": "294765da-3032-431f-9579-b0bf3e0fdd12", 00:08:04.400 "assigned_rate_limits": { 00:08:04.400 "rw_ios_per_sec": 0, 00:08:04.400 "rw_mbytes_per_sec": 0, 00:08:04.400 "r_mbytes_per_sec": 0, 00:08:04.400 "w_mbytes_per_sec": 0 00:08:04.400 }, 00:08:04.400 "claimed": false, 00:08:04.400 "zoned": false, 00:08:04.400 "supported_io_types": { 00:08:04.400 "read": true, 00:08:04.400 "write": true, 00:08:04.400 "unmap": false, 00:08:04.400 "flush": false, 00:08:04.400 "reset": true, 00:08:04.400 "nvme_admin": false, 00:08:04.400 "nvme_io": false, 00:08:04.400 "nvme_io_md": false, 00:08:04.400 "write_zeroes": true, 00:08:04.400 "zcopy": false, 00:08:04.400 "get_zone_info": false, 00:08:04.400 "zone_management": false, 00:08:04.400 "zone_append": false, 00:08:04.400 "compare": false, 00:08:04.400 "compare_and_write": false, 00:08:04.400 "abort": false, 00:08:04.400 "seek_hole": false, 00:08:04.400 "seek_data": false, 00:08:04.400 "copy": false, 00:08:04.400 "nvme_iov_md": false 00:08:04.400 }, 00:08:04.400 "memory_domains": [ 00:08:04.400 { 00:08:04.400 "dma_device_id": "system", 00:08:04.400 "dma_device_type": 1 00:08:04.400 }, 00:08:04.400 { 00:08:04.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.400 "dma_device_type": 2 00:08:04.400 }, 00:08:04.400 { 00:08:04.400 "dma_device_id": "system", 00:08:04.401 "dma_device_type": 1 00:08:04.401 }, 00:08:04.401 { 00:08:04.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.401 "dma_device_type": 2 00:08:04.401 } 00:08:04.401 ], 00:08:04.401 "driver_specific": { 00:08:04.401 "raid": { 00:08:04.401 "uuid": "294765da-3032-431f-9579-b0bf3e0fdd12", 00:08:04.401 "strip_size_kb": 0, 00:08:04.401 "state": "online", 00:08:04.401 "raid_level": "raid1", 00:08:04.401 "superblock": false, 00:08:04.401 "num_base_bdevs": 2, 00:08:04.401 "num_base_bdevs_discovered": 2, 00:08:04.401 "num_base_bdevs_operational": 
2, 00:08:04.401 "base_bdevs_list": [ 00:08:04.401 { 00:08:04.401 "name": "BaseBdev1", 00:08:04.401 "uuid": "5eb7dfd7-69a4-4ee7-8611-9c53d78e4814", 00:08:04.401 "is_configured": true, 00:08:04.401 "data_offset": 0, 00:08:04.401 "data_size": 65536 00:08:04.401 }, 00:08:04.401 { 00:08:04.401 "name": "BaseBdev2", 00:08:04.401 "uuid": "4caffd80-2718-4c2e-8d79-9983fcfaad79", 00:08:04.401 "is_configured": true, 00:08:04.401 "data_offset": 0, 00:08:04.401 "data_size": 65536 00:08:04.401 } 00:08:04.401 ] 00:08:04.401 } 00:08:04.401 } 00:08:04.401 }' 00:08:04.401 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:04.401 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:04.401 BaseBdev2' 00:08:04.401 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.660 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:04.660 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.660 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:04.660 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.661 10:53:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.661 [2024-11-15 10:53:11.462073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.661 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.921 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.921 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.921 "name": "Existed_Raid", 00:08:04.921 "uuid": 
"294765da-3032-431f-9579-b0bf3e0fdd12", 00:08:04.921 "strip_size_kb": 0, 00:08:04.921 "state": "online", 00:08:04.921 "raid_level": "raid1", 00:08:04.921 "superblock": false, 00:08:04.921 "num_base_bdevs": 2, 00:08:04.921 "num_base_bdevs_discovered": 1, 00:08:04.921 "num_base_bdevs_operational": 1, 00:08:04.921 "base_bdevs_list": [ 00:08:04.921 { 00:08:04.921 "name": null, 00:08:04.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.921 "is_configured": false, 00:08:04.921 "data_offset": 0, 00:08:04.921 "data_size": 65536 00:08:04.921 }, 00:08:04.921 { 00:08:04.921 "name": "BaseBdev2", 00:08:04.921 "uuid": "4caffd80-2718-4c2e-8d79-9983fcfaad79", 00:08:04.921 "is_configured": true, 00:08:04.921 "data_offset": 0, 00:08:04.921 "data_size": 65536 00:08:04.921 } 00:08:04.921 ] 00:08:04.921 }' 00:08:04.921 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.921 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.179 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:05.179 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:05.179 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.179 10:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:05.179 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.179 10:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.179 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.179 10:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:05.179 10:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:08:05.179 10:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:05.179 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.179 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.179 [2024-11-15 10:53:12.041244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:05.179 [2024-11-15 10:53:12.041359] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.437 [2024-11-15 10:53:12.143958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.437 [2024-11-15 10:53:12.144015] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.437 [2024-11-15 10:53:12.144028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:05.437 
10:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62843 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62843 ']' 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62843 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62843 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:05.437 killing process with pid 62843 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62843' 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62843 00:08:05.437 [2024-11-15 10:53:12.210682] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.437 10:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62843 00:08:05.437 [2024-11-15 10:53:12.227461] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.458 10:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:06.458 00:08:06.458 real 0m5.115s 00:08:06.458 user 0m7.383s 00:08:06.458 sys 0m0.853s 00:08:06.458 10:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:08:06.458 10:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.458 ************************************ 00:08:06.459 END TEST raid_state_function_test 00:08:06.459 ************************************ 00:08:06.718 10:53:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:06.719 10:53:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:06.719 10:53:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:06.719 10:53:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.719 ************************************ 00:08:06.719 START TEST raid_state_function_test_sb 00:08:06.719 ************************************ 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63091 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63091' 00:08:06.719 Process raid pid: 63091 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63091 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@833 -- # '[' -z 63091 ']' 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:06.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:06.719 10:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.719 [2024-11-15 10:53:13.535100] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:08:06.719 [2024-11-15 10:53:13.535255] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.978 [2024-11-15 10:53:13.709571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.978 [2024-11-15 10:53:13.828985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.238 [2024-11-15 10:53:14.036766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.238 [2024-11-15 10:53:14.036815] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.497 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:07.497 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:07.497 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:07.497 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.497 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.497 [2024-11-15 10:53:14.371294] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.497 [2024-11-15 10:53:14.371361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.497 [2024-11-15 10:53:14.371372] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.497 [2024-11-15 10:53:14.371382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.497 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.497 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:07.497 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.497 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.497 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.497 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.497 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.498 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.498 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.498 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:07.498 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.498 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.498 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.498 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.498 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.498 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.757 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.757 "name": "Existed_Raid", 00:08:07.757 "uuid": "7a333f92-c839-4c72-8084-0225550d02ba", 00:08:07.757 "strip_size_kb": 0, 00:08:07.757 "state": "configuring", 00:08:07.757 "raid_level": "raid1", 00:08:07.757 "superblock": true, 00:08:07.757 "num_base_bdevs": 2, 00:08:07.757 "num_base_bdevs_discovered": 0, 00:08:07.757 "num_base_bdevs_operational": 2, 00:08:07.757 "base_bdevs_list": [ 00:08:07.757 { 00:08:07.757 "name": "BaseBdev1", 00:08:07.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.757 "is_configured": false, 00:08:07.757 "data_offset": 0, 00:08:07.757 "data_size": 0 00:08:07.757 }, 00:08:07.757 { 00:08:07.758 "name": "BaseBdev2", 00:08:07.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.758 "is_configured": false, 00:08:07.758 "data_offset": 0, 00:08:07.758 "data_size": 0 00:08:07.758 } 00:08:07.758 ] 00:08:07.758 }' 00:08:07.758 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.758 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.018 [2024-11-15 10:53:14.818485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.018 [2024-11-15 10:53:14.818527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.018 [2024-11-15 10:53:14.830455] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.018 [2024-11-15 10:53:14.830496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.018 [2024-11-15 10:53:14.830505] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.018 [2024-11-15 10:53:14.830519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:08.018 [2024-11-15 10:53:14.881970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.018 BaseBdev1 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:08.018 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.019 [ 00:08:08.019 { 00:08:08.019 "name": "BaseBdev1", 00:08:08.019 "aliases": [ 00:08:08.019 "30674430-3281-4af9-a762-201d5b007a08" 00:08:08.019 ], 00:08:08.019 "product_name": "Malloc disk", 00:08:08.019 "block_size": 512, 
00:08:08.019 "num_blocks": 65536, 00:08:08.019 "uuid": "30674430-3281-4af9-a762-201d5b007a08", 00:08:08.019 "assigned_rate_limits": { 00:08:08.019 "rw_ios_per_sec": 0, 00:08:08.019 "rw_mbytes_per_sec": 0, 00:08:08.019 "r_mbytes_per_sec": 0, 00:08:08.019 "w_mbytes_per_sec": 0 00:08:08.019 }, 00:08:08.019 "claimed": true, 00:08:08.019 "claim_type": "exclusive_write", 00:08:08.019 "zoned": false, 00:08:08.019 "supported_io_types": { 00:08:08.019 "read": true, 00:08:08.019 "write": true, 00:08:08.019 "unmap": true, 00:08:08.019 "flush": true, 00:08:08.019 "reset": true, 00:08:08.019 "nvme_admin": false, 00:08:08.019 "nvme_io": false, 00:08:08.019 "nvme_io_md": false, 00:08:08.019 "write_zeroes": true, 00:08:08.019 "zcopy": true, 00:08:08.019 "get_zone_info": false, 00:08:08.019 "zone_management": false, 00:08:08.019 "zone_append": false, 00:08:08.019 "compare": false, 00:08:08.019 "compare_and_write": false, 00:08:08.019 "abort": true, 00:08:08.019 "seek_hole": false, 00:08:08.019 "seek_data": false, 00:08:08.019 "copy": true, 00:08:08.019 "nvme_iov_md": false 00:08:08.019 }, 00:08:08.019 "memory_domains": [ 00:08:08.019 { 00:08:08.019 "dma_device_id": "system", 00:08:08.019 "dma_device_type": 1 00:08:08.019 }, 00:08:08.019 { 00:08:08.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.019 "dma_device_type": 2 00:08:08.019 } 00:08:08.019 ], 00:08:08.019 "driver_specific": {} 00:08:08.019 } 00:08:08.019 ] 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.019 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.299 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.299 "name": "Existed_Raid", 00:08:08.299 "uuid": "27b96d64-39d9-4291-96c9-ecc4d77023e9", 00:08:08.299 "strip_size_kb": 0, 00:08:08.299 "state": "configuring", 00:08:08.299 "raid_level": "raid1", 00:08:08.299 "superblock": true, 00:08:08.299 "num_base_bdevs": 2, 00:08:08.299 "num_base_bdevs_discovered": 1, 00:08:08.299 "num_base_bdevs_operational": 2, 00:08:08.299 "base_bdevs_list": [ 00:08:08.299 { 00:08:08.299 "name": "BaseBdev1", 
00:08:08.299 "uuid": "30674430-3281-4af9-a762-201d5b007a08", 00:08:08.299 "is_configured": true, 00:08:08.299 "data_offset": 2048, 00:08:08.299 "data_size": 63488 00:08:08.299 }, 00:08:08.299 { 00:08:08.299 "name": "BaseBdev2", 00:08:08.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.299 "is_configured": false, 00:08:08.299 "data_offset": 0, 00:08:08.299 "data_size": 0 00:08:08.299 } 00:08:08.299 ] 00:08:08.299 }' 00:08:08.299 10:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.299 10:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.593 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.593 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.593 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.593 [2024-11-15 10:53:15.377198] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.593 [2024-11-15 10:53:15.377262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:08.593 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.594 [2024-11-15 10:53:15.389220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.594 [2024-11-15 10:53:15.391120] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:08:08.594 [2024-11-15 10:53:15.391163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.594 "name": "Existed_Raid", 00:08:08.594 "uuid": "d9093877-1f03-4546-a702-6b9e97cfaf1a", 00:08:08.594 "strip_size_kb": 0, 00:08:08.594 "state": "configuring", 00:08:08.594 "raid_level": "raid1", 00:08:08.594 "superblock": true, 00:08:08.594 "num_base_bdevs": 2, 00:08:08.594 "num_base_bdevs_discovered": 1, 00:08:08.594 "num_base_bdevs_operational": 2, 00:08:08.594 "base_bdevs_list": [ 00:08:08.594 { 00:08:08.594 "name": "BaseBdev1", 00:08:08.594 "uuid": "30674430-3281-4af9-a762-201d5b007a08", 00:08:08.594 "is_configured": true, 00:08:08.594 "data_offset": 2048, 00:08:08.594 "data_size": 63488 00:08:08.594 }, 00:08:08.594 { 00:08:08.594 "name": "BaseBdev2", 00:08:08.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.594 "is_configured": false, 00:08:08.594 "data_offset": 0, 00:08:08.594 "data_size": 0 00:08:08.594 } 00:08:08.594 ] 00:08:08.594 }' 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.594 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.161 [2024-11-15 10:53:15.871787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.161 [2024-11-15 10:53:15.872093] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:09.161 [2024-11-15 10:53:15.872110] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:09.161 [2024-11-15 10:53:15.872431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:09.161 [2024-11-15 10:53:15.872607] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:09.161 [2024-11-15 10:53:15.872622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:09.161 [2024-11-15 10:53:15.872800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.161 BaseBdev2 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.161 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.161 [ 00:08:09.161 { 00:08:09.161 "name": "BaseBdev2", 00:08:09.161 "aliases": [ 00:08:09.161 "a5173b31-ce77-4e7f-878b-a99cf6ea4346" 00:08:09.161 ], 00:08:09.161 "product_name": "Malloc disk", 00:08:09.161 "block_size": 512, 00:08:09.161 "num_blocks": 65536, 00:08:09.161 "uuid": "a5173b31-ce77-4e7f-878b-a99cf6ea4346", 00:08:09.161 "assigned_rate_limits": { 00:08:09.161 "rw_ios_per_sec": 0, 00:08:09.161 "rw_mbytes_per_sec": 0, 00:08:09.161 "r_mbytes_per_sec": 0, 00:08:09.161 "w_mbytes_per_sec": 0 00:08:09.161 }, 00:08:09.161 "claimed": true, 00:08:09.161 "claim_type": "exclusive_write", 00:08:09.161 "zoned": false, 00:08:09.161 "supported_io_types": { 00:08:09.161 "read": true, 00:08:09.161 "write": true, 00:08:09.161 "unmap": true, 00:08:09.161 "flush": true, 00:08:09.161 "reset": true, 00:08:09.161 "nvme_admin": false, 00:08:09.161 "nvme_io": false, 00:08:09.161 "nvme_io_md": false, 00:08:09.161 "write_zeroes": true, 00:08:09.161 "zcopy": true, 00:08:09.161 "get_zone_info": false, 00:08:09.161 "zone_management": false, 00:08:09.161 "zone_append": false, 00:08:09.161 "compare": false, 00:08:09.161 "compare_and_write": false, 00:08:09.161 "abort": true, 00:08:09.161 "seek_hole": false, 00:08:09.161 "seek_data": false, 00:08:09.161 "copy": true, 00:08:09.161 "nvme_iov_md": false 00:08:09.161 }, 00:08:09.161 "memory_domains": [ 00:08:09.161 { 00:08:09.161 "dma_device_id": "system", 00:08:09.161 "dma_device_type": 1 00:08:09.161 }, 00:08:09.161 { 00:08:09.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.161 "dma_device_type": 2 00:08:09.161 } 00:08:09.161 ], 00:08:09.161 "driver_specific": 
{} 00:08:09.161 } 00:08:09.161 ] 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.162 "name": "Existed_Raid", 00:08:09.162 "uuid": "d9093877-1f03-4546-a702-6b9e97cfaf1a", 00:08:09.162 "strip_size_kb": 0, 00:08:09.162 "state": "online", 00:08:09.162 "raid_level": "raid1", 00:08:09.162 "superblock": true, 00:08:09.162 "num_base_bdevs": 2, 00:08:09.162 "num_base_bdevs_discovered": 2, 00:08:09.162 "num_base_bdevs_operational": 2, 00:08:09.162 "base_bdevs_list": [ 00:08:09.162 { 00:08:09.162 "name": "BaseBdev1", 00:08:09.162 "uuid": "30674430-3281-4af9-a762-201d5b007a08", 00:08:09.162 "is_configured": true, 00:08:09.162 "data_offset": 2048, 00:08:09.162 "data_size": 63488 00:08:09.162 }, 00:08:09.162 { 00:08:09.162 "name": "BaseBdev2", 00:08:09.162 "uuid": "a5173b31-ce77-4e7f-878b-a99cf6ea4346", 00:08:09.162 "is_configured": true, 00:08:09.162 "data_offset": 2048, 00:08:09.162 "data_size": 63488 00:08:09.162 } 00:08:09.162 ] 00:08:09.162 }' 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.162 10:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.421 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:09.421 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:09.421 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.421 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.421 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:08:09.421 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.421 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:09.421 10:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.421 10:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.421 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.421 [2024-11-15 10:53:16.295470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.421 10:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.421 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.421 "name": "Existed_Raid", 00:08:09.421 "aliases": [ 00:08:09.421 "d9093877-1f03-4546-a702-6b9e97cfaf1a" 00:08:09.421 ], 00:08:09.421 "product_name": "Raid Volume", 00:08:09.421 "block_size": 512, 00:08:09.421 "num_blocks": 63488, 00:08:09.421 "uuid": "d9093877-1f03-4546-a702-6b9e97cfaf1a", 00:08:09.421 "assigned_rate_limits": { 00:08:09.421 "rw_ios_per_sec": 0, 00:08:09.421 "rw_mbytes_per_sec": 0, 00:08:09.421 "r_mbytes_per_sec": 0, 00:08:09.421 "w_mbytes_per_sec": 0 00:08:09.421 }, 00:08:09.421 "claimed": false, 00:08:09.421 "zoned": false, 00:08:09.421 "supported_io_types": { 00:08:09.421 "read": true, 00:08:09.421 "write": true, 00:08:09.421 "unmap": false, 00:08:09.421 "flush": false, 00:08:09.421 "reset": true, 00:08:09.421 "nvme_admin": false, 00:08:09.421 "nvme_io": false, 00:08:09.421 "nvme_io_md": false, 00:08:09.421 "write_zeroes": true, 00:08:09.421 "zcopy": false, 00:08:09.421 "get_zone_info": false, 00:08:09.421 "zone_management": false, 00:08:09.421 "zone_append": false, 00:08:09.421 "compare": false, 00:08:09.421 "compare_and_write": false, 
00:08:09.421 "abort": false, 00:08:09.421 "seek_hole": false, 00:08:09.421 "seek_data": false, 00:08:09.421 "copy": false, 00:08:09.421 "nvme_iov_md": false 00:08:09.421 }, 00:08:09.421 "memory_domains": [ 00:08:09.421 { 00:08:09.421 "dma_device_id": "system", 00:08:09.421 "dma_device_type": 1 00:08:09.421 }, 00:08:09.421 { 00:08:09.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.421 "dma_device_type": 2 00:08:09.421 }, 00:08:09.421 { 00:08:09.421 "dma_device_id": "system", 00:08:09.421 "dma_device_type": 1 00:08:09.421 }, 00:08:09.421 { 00:08:09.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.421 "dma_device_type": 2 00:08:09.421 } 00:08:09.421 ], 00:08:09.421 "driver_specific": { 00:08:09.421 "raid": { 00:08:09.421 "uuid": "d9093877-1f03-4546-a702-6b9e97cfaf1a", 00:08:09.421 "strip_size_kb": 0, 00:08:09.421 "state": "online", 00:08:09.421 "raid_level": "raid1", 00:08:09.421 "superblock": true, 00:08:09.421 "num_base_bdevs": 2, 00:08:09.421 "num_base_bdevs_discovered": 2, 00:08:09.421 "num_base_bdevs_operational": 2, 00:08:09.421 "base_bdevs_list": [ 00:08:09.421 { 00:08:09.421 "name": "BaseBdev1", 00:08:09.421 "uuid": "30674430-3281-4af9-a762-201d5b007a08", 00:08:09.421 "is_configured": true, 00:08:09.421 "data_offset": 2048, 00:08:09.421 "data_size": 63488 00:08:09.421 }, 00:08:09.421 { 00:08:09.421 "name": "BaseBdev2", 00:08:09.422 "uuid": "a5173b31-ce77-4e7f-878b-a99cf6ea4346", 00:08:09.422 "is_configured": true, 00:08:09.422 "data_offset": 2048, 00:08:09.422 "data_size": 63488 00:08:09.422 } 00:08:09.422 ] 00:08:09.422 } 00:08:09.422 } 00:08:09.422 }' 00:08:09.422 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:09.681 BaseBdev2' 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.681 10:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.681 [2024-11-15 10:53:16.542789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:09.940 10:53:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.940 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.941 10:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.941 10:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.941 10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.941 "name": "Existed_Raid", 00:08:09.941 "uuid": "d9093877-1f03-4546-a702-6b9e97cfaf1a", 00:08:09.941 "strip_size_kb": 0, 00:08:09.941 "state": "online", 00:08:09.941 "raid_level": "raid1", 00:08:09.941 "superblock": true, 00:08:09.941 "num_base_bdevs": 2, 00:08:09.941 "num_base_bdevs_discovered": 1, 00:08:09.941 "num_base_bdevs_operational": 1, 00:08:09.941 "base_bdevs_list": [ 00:08:09.941 { 00:08:09.941 "name": null, 00:08:09.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.941 "is_configured": false, 00:08:09.941 "data_offset": 0, 00:08:09.941 "data_size": 63488 00:08:09.941 }, 00:08:09.941 { 00:08:09.941 "name": "BaseBdev2", 00:08:09.941 "uuid": "a5173b31-ce77-4e7f-878b-a99cf6ea4346", 00:08:09.941 "is_configured": true, 00:08:09.941 "data_offset": 2048, 00:08:09.941 "data_size": 63488 00:08:09.941 } 00:08:09.941 ] 00:08:09.941 }' 00:08:09.941 
10:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.941 10:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.200 10:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:10.200 10:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.200 10:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:10.200 10:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.200 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.200 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.200 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.200 10:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:10.200 10:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:10.200 10:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:10.200 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.200 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.200 [2024-11-15 10:53:17.105197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:10.200 [2024-11-15 10:53:17.105317] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.460 [2024-11-15 10:53:17.202165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.460 [2024-11-15 10:53:17.202225] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:10.460 [2024-11-15 10:53:17.202237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63091 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 63091 ']' 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 63091 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63091 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:10.460 killing process with pid 63091 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63091' 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 63091 00:08:10.460 [2024-11-15 10:53:17.284267] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:10.460 10:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 63091 00:08:10.460 [2024-11-15 10:53:17.301290] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:11.836 10:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:11.836 00:08:11.836 real 0m5.020s 00:08:11.836 user 0m7.202s 00:08:11.836 sys 0m0.808s 00:08:11.836 10:53:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:11.836 ************************************ 00:08:11.836 END TEST raid_state_function_test_sb 00:08:11.836 10:53:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.836 ************************************ 00:08:11.836 10:53:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:11.836 10:53:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:11.836 10:53:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:11.836 10:53:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:11.836 
************************************ 00:08:11.836 START TEST raid_superblock_test 00:08:11.836 ************************************ 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63343 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63343 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 63343 ']' 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:11.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:11.836 10:53:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.836 [2024-11-15 10:53:18.606966] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:08:11.836 [2024-11-15 10:53:18.607091] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63343 ] 00:08:12.094 [2024-11-15 10:53:18.787845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.094 [2024-11-15 10:53:18.906533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.352 [2024-11-15 10:53:19.115425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.352 [2024-11-15 10:53:19.115473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:12.617 
10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.617 malloc1 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.617 [2024-11-15 10:53:19.506228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:12.617 [2024-11-15 10:53:19.506296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.617 [2024-11-15 10:53:19.506332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:12.617 [2024-11-15 10:53:19.506342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.617 [2024-11-15 10:53:19.508448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.617 [2024-11-15 10:53:19.508481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:12.617 pt1 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:12.617 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:12.618 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:12.618 10:53:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:12.618 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:12.618 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:12.618 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:12.618 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:12.618 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.618 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.889 malloc2 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.889 [2024-11-15 10:53:19.560018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:12.889 [2024-11-15 10:53:19.560074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.889 [2024-11-15 10:53:19.560095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:12.889 [2024-11-15 10:53:19.560104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.889 [2024-11-15 10:53:19.562219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.889 [2024-11-15 10:53:19.562265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:12.889 
pt2 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.889 [2024-11-15 10:53:19.572049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:12.889 [2024-11-15 10:53:19.573856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:12.889 [2024-11-15 10:53:19.574016] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:12.889 [2024-11-15 10:53:19.574040] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:12.889 [2024-11-15 10:53:19.574301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:12.889 [2024-11-15 10:53:19.574483] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:12.889 [2024-11-15 10:53:19.574505] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:12.889 [2024-11-15 10:53:19.574656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.889 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.889 "name": "raid_bdev1", 00:08:12.889 "uuid": "5500e695-1450-4525-a763-26298aecc125", 00:08:12.889 "strip_size_kb": 0, 00:08:12.889 "state": "online", 00:08:12.889 "raid_level": "raid1", 00:08:12.889 "superblock": true, 00:08:12.889 "num_base_bdevs": 2, 00:08:12.889 "num_base_bdevs_discovered": 2, 00:08:12.889 "num_base_bdevs_operational": 2, 00:08:12.889 "base_bdevs_list": [ 00:08:12.889 { 00:08:12.889 "name": "pt1", 00:08:12.889 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:12.890 "is_configured": true, 00:08:12.890 "data_offset": 2048, 00:08:12.890 "data_size": 63488 00:08:12.890 }, 00:08:12.890 { 00:08:12.890 "name": "pt2", 00:08:12.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:12.890 "is_configured": true, 00:08:12.890 "data_offset": 2048, 00:08:12.890 "data_size": 63488 00:08:12.890 } 00:08:12.890 ] 00:08:12.890 }' 00:08:12.890 10:53:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.890 10:53:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.148 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:13.148 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:13.148 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.148 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:13.148 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.148 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.148 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.148 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.148 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.148 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.148 [2024-11-15 10:53:20.027624] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.148 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.148 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:13.148 "name": "raid_bdev1", 00:08:13.148 "aliases": [ 00:08:13.148 "5500e695-1450-4525-a763-26298aecc125" 00:08:13.148 ], 00:08:13.148 "product_name": "Raid Volume", 00:08:13.148 "block_size": 512, 00:08:13.148 "num_blocks": 63488, 00:08:13.148 "uuid": "5500e695-1450-4525-a763-26298aecc125", 00:08:13.148 "assigned_rate_limits": { 00:08:13.148 "rw_ios_per_sec": 0, 00:08:13.148 "rw_mbytes_per_sec": 0, 00:08:13.148 "r_mbytes_per_sec": 0, 00:08:13.148 "w_mbytes_per_sec": 0 00:08:13.148 }, 00:08:13.148 "claimed": false, 00:08:13.148 "zoned": false, 00:08:13.148 "supported_io_types": { 00:08:13.148 "read": true, 00:08:13.148 "write": true, 00:08:13.148 "unmap": false, 00:08:13.148 "flush": false, 00:08:13.148 "reset": true, 00:08:13.148 "nvme_admin": false, 00:08:13.148 "nvme_io": false, 00:08:13.148 "nvme_io_md": false, 00:08:13.148 "write_zeroes": true, 00:08:13.148 "zcopy": false, 00:08:13.148 "get_zone_info": false, 00:08:13.148 "zone_management": false, 00:08:13.148 "zone_append": false, 00:08:13.148 "compare": false, 00:08:13.148 "compare_and_write": false, 00:08:13.148 "abort": false, 00:08:13.148 "seek_hole": false, 00:08:13.148 "seek_data": false, 00:08:13.148 "copy": false, 00:08:13.148 "nvme_iov_md": false 00:08:13.148 }, 00:08:13.148 "memory_domains": [ 00:08:13.148 { 00:08:13.148 "dma_device_id": "system", 00:08:13.148 "dma_device_type": 1 00:08:13.148 }, 00:08:13.148 { 00:08:13.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.148 "dma_device_type": 2 00:08:13.148 }, 00:08:13.148 { 00:08:13.148 "dma_device_id": "system", 00:08:13.148 "dma_device_type": 1 00:08:13.148 }, 00:08:13.148 { 00:08:13.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.148 "dma_device_type": 2 00:08:13.148 } 00:08:13.148 ], 00:08:13.148 "driver_specific": { 00:08:13.148 "raid": { 00:08:13.148 "uuid": "5500e695-1450-4525-a763-26298aecc125", 00:08:13.148 "strip_size_kb": 0, 00:08:13.148 "state": "online", 00:08:13.148 "raid_level": "raid1", 
00:08:13.148 "superblock": true, 00:08:13.148 "num_base_bdevs": 2, 00:08:13.149 "num_base_bdevs_discovered": 2, 00:08:13.149 "num_base_bdevs_operational": 2, 00:08:13.149 "base_bdevs_list": [ 00:08:13.149 { 00:08:13.149 "name": "pt1", 00:08:13.149 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.149 "is_configured": true, 00:08:13.149 "data_offset": 2048, 00:08:13.149 "data_size": 63488 00:08:13.149 }, 00:08:13.149 { 00:08:13.149 "name": "pt2", 00:08:13.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.149 "is_configured": true, 00:08:13.149 "data_offset": 2048, 00:08:13.149 "data_size": 63488 00:08:13.149 } 00:08:13.149 ] 00:08:13.149 } 00:08:13.149 } 00:08:13.149 }' 00:08:13.149 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:13.407 pt2' 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.407 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.408 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:13.408 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.408 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.408 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.408 [2024-11-15 10:53:20.239176] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.408 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.408 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5500e695-1450-4525-a763-26298aecc125 00:08:13.408 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5500e695-1450-4525-a763-26298aecc125 ']' 00:08:13.408 10:53:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:13.408 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.408 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.408 [2024-11-15 10:53:20.282814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.408 [2024-11-15 10:53:20.282845] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.408 [2024-11-15 10:53:20.282941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.408 [2024-11-15 10:53:20.283014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.408 [2024-11-15 10:53:20.283032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:13.408 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.408 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.408 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.408 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.408 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:13.408 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:13.667 10:53:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.667 [2024-11-15 10:53:20.418627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:13.667 [2024-11-15 10:53:20.420785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:13.667 [2024-11-15 10:53:20.420870] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:13.667 [2024-11-15 10:53:20.420926] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:13.667 [2024-11-15 10:53:20.420942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.667 [2024-11-15 10:53:20.420954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:13.667 request: 00:08:13.667 { 00:08:13.667 "name": "raid_bdev1", 00:08:13.667 "raid_level": "raid1", 00:08:13.667 "base_bdevs": [ 00:08:13.667 "malloc1", 00:08:13.667 "malloc2" 00:08:13.667 ], 00:08:13.667 "superblock": false, 00:08:13.667 "method": "bdev_raid_create", 00:08:13.667 "req_id": 1 00:08:13.667 } 00:08:13.667 Got 
JSON-RPC error response 00:08:13.667 response: 00:08:13.667 { 00:08:13.667 "code": -17, 00:08:13.667 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:13.667 } 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.667 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.667 [2024-11-15 10:53:20.494459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:13.667 [2024-11-15 10:53:20.494529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:08:13.667 [2024-11-15 10:53:20.494547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:13.668 [2024-11-15 10:53:20.494557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.668 [2024-11-15 10:53:20.496838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.668 [2024-11-15 10:53:20.496876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:13.668 [2024-11-15 10:53:20.496966] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:13.668 [2024-11-15 10:53:20.497032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:13.668 pt1 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.668 
10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.668 "name": "raid_bdev1", 00:08:13.668 "uuid": "5500e695-1450-4525-a763-26298aecc125", 00:08:13.668 "strip_size_kb": 0, 00:08:13.668 "state": "configuring", 00:08:13.668 "raid_level": "raid1", 00:08:13.668 "superblock": true, 00:08:13.668 "num_base_bdevs": 2, 00:08:13.668 "num_base_bdevs_discovered": 1, 00:08:13.668 "num_base_bdevs_operational": 2, 00:08:13.668 "base_bdevs_list": [ 00:08:13.668 { 00:08:13.668 "name": "pt1", 00:08:13.668 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.668 "is_configured": true, 00:08:13.668 "data_offset": 2048, 00:08:13.668 "data_size": 63488 00:08:13.668 }, 00:08:13.668 { 00:08:13.668 "name": null, 00:08:13.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.668 "is_configured": false, 00:08:13.668 "data_offset": 2048, 00:08:13.668 "data_size": 63488 00:08:13.668 } 00:08:13.668 ] 00:08:13.668 }' 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.668 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.235 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:14.235 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:14.235 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:08:14.235 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.235 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.235 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.235 [2024-11-15 10:53:20.881817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.235 [2024-11-15 10:53:20.881893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.235 [2024-11-15 10:53:20.881916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:14.235 [2024-11-15 10:53:20.881928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.235 [2024-11-15 10:53:20.882467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.235 [2024-11-15 10:53:20.882498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.235 [2024-11-15 10:53:20.882594] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:14.236 [2024-11-15 10:53:20.882626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.236 [2024-11-15 10:53:20.882758] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:14.236 [2024-11-15 10:53:20.882774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:14.236 [2024-11-15 10:53:20.883030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:14.236 [2024-11-15 10:53:20.883215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:14.236 [2024-11-15 10:53:20.883233] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:08:14.236 [2024-11-15 10:53:20.883428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.236 pt2 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.236 "name": "raid_bdev1", 00:08:14.236 "uuid": "5500e695-1450-4525-a763-26298aecc125", 00:08:14.236 "strip_size_kb": 0, 00:08:14.236 "state": "online", 00:08:14.236 "raid_level": "raid1", 00:08:14.236 "superblock": true, 00:08:14.236 "num_base_bdevs": 2, 00:08:14.236 "num_base_bdevs_discovered": 2, 00:08:14.236 "num_base_bdevs_operational": 2, 00:08:14.236 "base_bdevs_list": [ 00:08:14.236 { 00:08:14.236 "name": "pt1", 00:08:14.236 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.236 "is_configured": true, 00:08:14.236 "data_offset": 2048, 00:08:14.236 "data_size": 63488 00:08:14.236 }, 00:08:14.236 { 00:08:14.236 "name": "pt2", 00:08:14.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.236 "is_configured": true, 00:08:14.236 "data_offset": 2048, 00:08:14.236 "data_size": 63488 00:08:14.236 } 00:08:14.236 ] 00:08:14.236 }' 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.236 10:53:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.495 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:14.495 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:14.495 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:14.495 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:14.495 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:14.495 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:14.495 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:14.495 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.495 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.495 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:14.495 [2024-11-15 10:53:21.329331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.495 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.495 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:14.495 "name": "raid_bdev1", 00:08:14.495 "aliases": [ 00:08:14.495 "5500e695-1450-4525-a763-26298aecc125" 00:08:14.495 ], 00:08:14.495 "product_name": "Raid Volume", 00:08:14.495 "block_size": 512, 00:08:14.495 "num_blocks": 63488, 00:08:14.495 "uuid": "5500e695-1450-4525-a763-26298aecc125", 00:08:14.495 "assigned_rate_limits": { 00:08:14.495 "rw_ios_per_sec": 0, 00:08:14.495 "rw_mbytes_per_sec": 0, 00:08:14.495 "r_mbytes_per_sec": 0, 00:08:14.495 "w_mbytes_per_sec": 0 00:08:14.495 }, 00:08:14.495 "claimed": false, 00:08:14.495 "zoned": false, 00:08:14.495 "supported_io_types": { 00:08:14.495 "read": true, 00:08:14.495 "write": true, 00:08:14.495 "unmap": false, 00:08:14.495 "flush": false, 00:08:14.495 "reset": true, 00:08:14.495 "nvme_admin": false, 00:08:14.495 "nvme_io": false, 00:08:14.495 "nvme_io_md": false, 00:08:14.495 "write_zeroes": true, 00:08:14.495 "zcopy": false, 00:08:14.495 "get_zone_info": false, 00:08:14.495 "zone_management": false, 00:08:14.495 "zone_append": false, 00:08:14.495 "compare": false, 00:08:14.495 "compare_and_write": false, 00:08:14.495 "abort": false, 00:08:14.495 "seek_hole": false, 00:08:14.495 "seek_data": false, 00:08:14.495 "copy": false, 00:08:14.495 "nvme_iov_md": false 00:08:14.495 }, 00:08:14.495 "memory_domains": [ 00:08:14.495 { 00:08:14.495 "dma_device_id": 
"system",
00:08:14.495 "dma_device_type": 1
00:08:14.495 },
00:08:14.495 {
00:08:14.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:14.495 "dma_device_type": 2
00:08:14.495 },
00:08:14.495 {
00:08:14.495 "dma_device_id": "system",
00:08:14.495 "dma_device_type": 1
00:08:14.495 },
00:08:14.495 {
00:08:14.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:14.495 "dma_device_type": 2
00:08:14.495 }
00:08:14.495 ],
00:08:14.495 "driver_specific": {
00:08:14.495 "raid": {
00:08:14.495 "uuid": "5500e695-1450-4525-a763-26298aecc125",
00:08:14.495 "strip_size_kb": 0,
00:08:14.495 "state": "online",
00:08:14.495 "raid_level": "raid1",
00:08:14.495 "superblock": true,
00:08:14.495 "num_base_bdevs": 2,
00:08:14.495 "num_base_bdevs_discovered": 2,
00:08:14.495 "num_base_bdevs_operational": 2,
00:08:14.495 "base_bdevs_list": [
00:08:14.495 {
00:08:14.495 "name": "pt1",
00:08:14.495 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:14.495 "is_configured": true,
00:08:14.495 "data_offset": 2048,
00:08:14.495 "data_size": 63488
00:08:14.495 },
00:08:14.495 {
00:08:14.495 "name": "pt2",
00:08:14.495 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:14.495 "is_configured": true,
00:08:14.495 "data_offset": 2048,
00:08:14.495 "data_size": 63488
00:08:14.495 }
00:08:14.495 ]
00:08:14.495 }
00:08:14.495 }
00:08:14.495 }'
00:08:14.495 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:14.495 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:14.495 pt2'
00:08:14.495 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487
-- # jq -r '.[] | .uuid'
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.754 [2024-11-15 10:53:21.564934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5500e695-1450-4525-a763-26298aecc125 '!=' 5500e695-1450-4525-a763-26298aecc125 ']'
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.754 [2024-11-15 10:53:21.608621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local
num_base_bdevs_operational=1
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.754 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.755 10:53:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:14.755 "name": "raid_bdev1",
00:08:14.755 "uuid": "5500e695-1450-4525-a763-26298aecc125",
00:08:14.755 "strip_size_kb": 0,
00:08:14.755 "state": "online",
00:08:14.755 "raid_level": "raid1",
00:08:14.755 "superblock": true,
00:08:14.755 "num_base_bdevs": 2,
00:08:14.755 "num_base_bdevs_discovered": 1,
00:08:14.755 "num_base_bdevs_operational": 1,
00:08:14.755 "base_bdevs_list": [
00:08:14.755 {
00:08:14.755 "name": null,
00:08:14.755 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:14.755 "is_configured": false,
00:08:14.755 "data_offset": 0,
00:08:14.755 "data_size": 63488
00:08:14.755 },
00:08:14.755 {
00:08:14.755 "name": "pt2",
00:08:14.755 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:14.755 "is_configured": true,
00:08:14.755 "data_offset": 2048,
00:08:14.755 "data_size": 63488
00:08:14.755 }
00:08:14.755 ]
00:08:14.755 }'
00:08:14.755 10:53:21
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:14.755 10:53:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.322 [2024-11-15 10:53:22.079805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:15.322 [2024-11-15 10:53:22.079836] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:15.322 [2024-11-15 10:53:22.079928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:15.322 [2024-11-15 10:53:22.079979] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:15.322 [2024-11-15 10:53:22.080003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n ''
10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.322 [2024-11-15 10:53:22.151669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:15.322 [2024-11-15 10:53:22.151735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:15.322 [2024-11-15 10:53:22.151754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:08:15.322 [2024-11-15 10:53:22.151765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:15.322 [2024-11-15
10:53:22.154261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:15.322 [2024-11-15 10:53:22.154313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:15.322 [2024-11-15 10:53:22.154404] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:15.322 [2024-11-15 10:53:22.154460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:15.322 [2024-11-15 10:53:22.154576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:08:15.322 [2024-11-15 10:53:22.154591] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:15.322 [2024-11-15 10:53:22.154824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:08:15.322 [2024-11-15 10:53:22.154983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:08:15.322 [2024-11-15 10:53:22.154998] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:08:15.322 [2024-11-15 10:53:22.155153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:15.322 pt2
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test --
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.322 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:15.323 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:15.323 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:15.323 "name": "raid_bdev1",
00:08:15.323 "uuid": "5500e695-1450-4525-a763-26298aecc125",
00:08:15.323 "strip_size_kb": 0,
00:08:15.323 "state": "online",
00:08:15.323 "raid_level": "raid1",
00:08:15.323 "superblock": true,
00:08:15.323 "num_base_bdevs": 2,
00:08:15.323 "num_base_bdevs_discovered": 1,
00:08:15.323 "num_base_bdevs_operational": 1,
00:08:15.323 "base_bdevs_list": [
00:08:15.323 {
00:08:15.323 "name": null,
00:08:15.323 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:15.323 "is_configured": false,
00:08:15.323 "data_offset": 2048,
00:08:15.323 "data_size": 63488
00:08:15.323 },
00:08:15.323 {
00:08:15.323 "name": "pt2",
00:08:15.323 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:15.323 "is_configured": true,
00:08:15.323 "data_offset": 2048,
00:08:15.323 "data_size": 63488
00:08:15.323 }
00:08:15.323 ]
00:08:15.323 }'
00:08:15.323 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:15.323 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.890 [2024-11-15 10:53:22.630854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:15.890 [2024-11-15 10:53:22.630892] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:15.890 [2024-11-15 10:53:22.630976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:15.890 [2024-11-15 10:53:22.631027] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:15.890 [2024-11-15 10:53:22.631037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n ''
']'
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.890 [2024-11-15 10:53:22.694740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:15.890 [2024-11-15 10:53:22.694795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:15.890 [2024-11-15 10:53:22.694814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:08:15.890 [2024-11-15 10:53:22.694824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:15.890 [2024-11-15 10:53:22.697178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:15.890 [2024-11-15 10:53:22.697211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:15.890 [2024-11-15 10:53:22.697319] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:08:15.890 [2024-11-15 10:53:22.697370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:15.890 [2024-11-15 10:53:22.697554] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:08:15.890 [2024-11-15 10:53:22.697569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:15.890 [2024-11-15 10:53:22.697588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:08:15.890 [2024-11-15 10:53:22.697648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev
pt2 is claimed
00:08:15.890 [2024-11-15 10:53:22.697733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:08:15.890 [2024-11-15 10:53:22.697742] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:15.890 [2024-11-15 10:53:22.698013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:08:15.890 [2024-11-15 10:53:22.698178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:08:15.890 [2024-11-15 10:53:22.698199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:08:15.890 [2024-11-15 10:53:22.698379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:15.890 pt1
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']'
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local
num_base_bdevs_discovered
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:15.890 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:15.890 "name": "raid_bdev1",
00:08:15.890 "uuid": "5500e695-1450-4525-a763-26298aecc125",
00:08:15.890 "strip_size_kb": 0,
00:08:15.891 "state": "online",
00:08:15.891 "raid_level": "raid1",
00:08:15.891 "superblock": true,
00:08:15.891 "num_base_bdevs": 2,
00:08:15.891 "num_base_bdevs_discovered": 1,
00:08:15.891 "num_base_bdevs_operational": 1,
00:08:15.891 "base_bdevs_list": [
00:08:15.891 {
00:08:15.891 "name": null,
00:08:15.891 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:15.891 "is_configured": false,
00:08:15.891 "data_offset": 2048,
00:08:15.891 "data_size": 63488
00:08:15.891 },
00:08:15.891 {
00:08:15.891 "name": "pt2",
00:08:15.891 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:15.891 "is_configured": true,
00:08:15.891 "data_offset": 2048,
00:08:15.891 "data_size": 63488
00:08:15.891 }
00:08:15.891 ]
00:08:15.891 }'
00:08:15.891 10:53:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:15.891 10:53:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.458 [2024-11-15 10:53:23.198171] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5500e695-1450-4525-a763-26298aecc125 '!=' 5500e695-1450-4525-a763-26298aecc125 ']'
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63343
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 63343 ']'
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 63343
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63343
00:08:16.458 10:53:23
bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
killing process with pid 63343
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63343'
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 63343
00:08:16.458 [2024-11-15 10:53:23.261381] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:16.458 [2024-11-15 10:53:23.261482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:16.458 [2024-11-15 10:53:23.261537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:16.458 [2024-11-15 10:53:23.261552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:08:16.458 10:53:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 63343
00:08:16.717 [2024-11-15 10:53:23.472521] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:18.119 10:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:08:18.119
00:08:18.119 real 0m6.114s
00:08:18.119 user 0m9.253s
00:08:18.119 sys 0m1.061s
00:08:18.119 10:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:18.119 10:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.119 ************************************
00:08:18.119 END TEST raid_superblock_test
00:08:18.119 ************************************
00:08:18.119 10:53:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read
00:08:18.119 10:53:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:08:18.119 10:53:24 bdev_raid --
common/autotest_common.sh@1109 -- # xtrace_disable
00:08:18.119 10:53:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:18.119 ************************************
00:08:18.119 START TEST raid_read_error_test
00:08:18.119 ************************************
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:08:18.119 10:53:24
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1KV8Q4HskG
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63673
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63673
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63673 ']'
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:18.119 10:53:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.119 [2024-11-15 10:53:24.805786] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization...
00:08:18.119 [2024-11-15 10:53:24.805910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63673 ]
00:08:18.119 [2024-11-15 10:53:24.981060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:18.378 [2024-11-15 10:53:25.098413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:18.637 [2024-11-15 10:53:25.310608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:18.637 [2024-11-15 10:53:25.310644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0
00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
BaseBdev1_malloc
00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.896 true 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.896 [2024-11-15 10:53:25.726761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:18.896 [2024-11-15 10:53:25.726812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.896 [2024-11-15 10:53:25.726832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:18.896 [2024-11-15 10:53:25.726843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.896 [2024-11-15 10:53:25.729156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.896 [2024-11-15 10:53:25.729195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:18.896 BaseBdev1 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:18.896 BaseBdev2_malloc 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.896 true 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.896 [2024-11-15 10:53:25.793956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:18.896 [2024-11-15 10:53:25.794013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.896 [2024-11-15 10:53:25.794029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:18.896 [2024-11-15 10:53:25.794040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.896 [2024-11-15 10:53:25.796239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.896 [2024-11-15 10:53:25.796283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:18.896 BaseBdev2 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:18.896 10:53:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.896 [2024-11-15 10:53:25.806021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.896 [2024-11-15 10:53:25.807884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.896 [2024-11-15 10:53:25.808188] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:18.896 [2024-11-15 10:53:25.808209] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:18.896 [2024-11-15 10:53:25.808469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:18.896 [2024-11-15 10:53:25.808671] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:18.896 [2024-11-15 10:53:25.808683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:18.896 [2024-11-15 10:53:25.808854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.896 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.156 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.156 10:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.156 "name": "raid_bdev1", 00:08:19.156 "uuid": "3935db85-75fd-4c27-964c-3840fa42eeb7", 00:08:19.156 "strip_size_kb": 0, 00:08:19.156 "state": "online", 00:08:19.156 "raid_level": "raid1", 00:08:19.156 "superblock": true, 00:08:19.156 "num_base_bdevs": 2, 00:08:19.156 "num_base_bdevs_discovered": 2, 00:08:19.156 "num_base_bdevs_operational": 2, 00:08:19.156 "base_bdevs_list": [ 00:08:19.156 { 00:08:19.156 "name": "BaseBdev1", 00:08:19.156 "uuid": "830c9f34-7859-55f3-9790-1cf0b7dd40f1", 00:08:19.156 "is_configured": true, 00:08:19.156 "data_offset": 2048, 00:08:19.156 "data_size": 63488 00:08:19.156 }, 00:08:19.156 { 00:08:19.156 "name": "BaseBdev2", 00:08:19.156 "uuid": "429fdf29-7200-549c-94af-d3122ba4e6f1", 00:08:19.156 "is_configured": true, 00:08:19.156 "data_offset": 2048, 00:08:19.156 "data_size": 63488 00:08:19.156 } 00:08:19.156 ] 00:08:19.156 }' 00:08:19.156 10:53:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.156 10:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.414 10:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:19.414 10:53:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:19.673 [2024-11-15 10:53:26.346504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:20.607 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:20.607 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.607 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.607 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.607 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:20.607 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:20.607 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:20.607 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:20.607 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:20.608 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.608 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.608 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:20.608 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:20.608 10:53:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.608 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.608 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.608 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.608 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.608 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.608 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.608 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.608 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.608 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.608 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.608 "name": "raid_bdev1", 00:08:20.608 "uuid": "3935db85-75fd-4c27-964c-3840fa42eeb7", 00:08:20.608 "strip_size_kb": 0, 00:08:20.608 "state": "online", 00:08:20.608 "raid_level": "raid1", 00:08:20.608 "superblock": true, 00:08:20.608 "num_base_bdevs": 2, 00:08:20.608 "num_base_bdevs_discovered": 2, 00:08:20.608 "num_base_bdevs_operational": 2, 00:08:20.608 "base_bdevs_list": [ 00:08:20.608 { 00:08:20.608 "name": "BaseBdev1", 00:08:20.608 "uuid": "830c9f34-7859-55f3-9790-1cf0b7dd40f1", 00:08:20.608 "is_configured": true, 00:08:20.608 "data_offset": 2048, 00:08:20.608 "data_size": 63488 00:08:20.608 }, 00:08:20.608 { 00:08:20.608 "name": "BaseBdev2", 00:08:20.608 "uuid": "429fdf29-7200-549c-94af-d3122ba4e6f1", 00:08:20.608 "is_configured": true, 00:08:20.608 "data_offset": 2048, 00:08:20.608 "data_size": 63488 
00:08:20.608 } 00:08:20.608 ] 00:08:20.608 }' 00:08:20.608 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.608 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.866 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:20.866 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.866 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.866 [2024-11-15 10:53:27.719095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:20.866 [2024-11-15 10:53:27.719208] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.866 [2024-11-15 10:53:27.721926] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.866 [2024-11-15 10:53:27.722012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.866 [2024-11-15 10:53:27.722125] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.866 [2024-11-15 10:53:27.722175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:20.866 { 00:08:20.866 "results": [ 00:08:20.866 { 00:08:20.866 "job": "raid_bdev1", 00:08:20.866 "core_mask": "0x1", 00:08:20.866 "workload": "randrw", 00:08:20.866 "percentage": 50, 00:08:20.866 "status": "finished", 00:08:20.866 "queue_depth": 1, 00:08:20.866 "io_size": 131072, 00:08:20.866 "runtime": 1.373559, 00:08:20.866 "iops": 16971.9684411081, 00:08:20.866 "mibps": 2121.4960551385125, 00:08:20.866 "io_failed": 0, 00:08:20.866 "io_timeout": 0, 00:08:20.866 "avg_latency_us": 56.16161221388688, 00:08:20.866 "min_latency_us": 22.805240174672488, 00:08:20.866 "max_latency_us": 1695.6366812227075 00:08:20.866 } 00:08:20.866 ], 
00:08:20.866 "core_count": 1 00:08:20.866 } 00:08:20.866 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.866 10:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63673 00:08:20.866 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 63673 ']' 00:08:20.866 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63673 00:08:20.867 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:20.867 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:20.867 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63673 00:08:20.867 killing process with pid 63673 00:08:20.867 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:20.867 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:20.867 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63673' 00:08:20.867 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63673 00:08:20.867 [2024-11-15 10:53:27.758471] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.867 10:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63673 00:08:21.125 [2024-11-15 10:53:27.903827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:22.560 10:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1KV8Q4HskG 00:08:22.560 10:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:22.560 10:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:22.560 10:53:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:22.560 10:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:22.560 10:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:22.560 10:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:22.560 ************************************ 00:08:22.560 END TEST raid_read_error_test 00:08:22.560 ************************************ 00:08:22.560 10:53:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:22.560 00:08:22.560 real 0m4.438s 00:08:22.560 user 0m5.306s 00:08:22.560 sys 0m0.542s 00:08:22.560 10:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:22.560 10:53:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.560 10:53:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:22.560 10:53:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:22.560 10:53:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:22.560 10:53:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:22.560 ************************************ 00:08:22.560 START TEST raid_write_error_test 00:08:22.560 ************************************ 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.osgvFKDo0K 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63818 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63818 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 63818 ']' 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:22.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.560 10:53:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.561 10:53:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:22.561 10:53:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.561 [2024-11-15 10:53:29.308649] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:08:22.561 [2024-11-15 10:53:29.308777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63818 ] 00:08:22.819 [2024-11-15 10:53:29.501389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.819 [2024-11-15 10:53:29.624444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.077 [2024-11-15 10:53:29.848546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.077 [2024-11-15 10:53:29.848590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.336 BaseBdev1_malloc 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.336 true 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.336 [2024-11-15 10:53:30.245056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:23.336 [2024-11-15 10:53:30.245198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.336 [2024-11-15 10:53:30.245230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:23.336 [2024-11-15 10:53:30.245245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.336 [2024-11-15 10:53:30.247911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.336 [2024-11-15 10:53:30.247958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:23.336 BaseBdev1 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.336 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.596 BaseBdev2_malloc 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:23.596 10:53:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.596 true 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.596 [2024-11-15 10:53:30.317648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:23.596 [2024-11-15 10:53:30.317704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.596 [2024-11-15 10:53:30.317720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:23.596 [2024-11-15 10:53:30.317731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.596 [2024-11-15 10:53:30.319942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.596 [2024-11-15 10:53:30.320059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:23.596 BaseBdev2 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.596 [2024-11-15 10:53:30.329707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:23.596 [2024-11-15 10:53:30.331739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.596 [2024-11-15 10:53:30.331959] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:23.596 [2024-11-15 10:53:30.331977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:23.596 [2024-11-15 10:53:30.332231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:23.596 [2024-11-15 10:53:30.332447] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:23.596 [2024-11-15 10:53:30.332460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:23.596 [2024-11-15 10:53:30.332633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.596 "name": "raid_bdev1", 00:08:23.596 "uuid": "89c1fe9a-f74c-47ff-9c1f-3953948e4de8", 00:08:23.596 "strip_size_kb": 0, 00:08:23.596 "state": "online", 00:08:23.596 "raid_level": "raid1", 00:08:23.596 "superblock": true, 00:08:23.596 "num_base_bdevs": 2, 00:08:23.596 "num_base_bdevs_discovered": 2, 00:08:23.596 "num_base_bdevs_operational": 2, 00:08:23.596 "base_bdevs_list": [ 00:08:23.596 { 00:08:23.596 "name": "BaseBdev1", 00:08:23.596 "uuid": "67db1637-3c5a-57c7-8056-dbb6f3f5bc9b", 00:08:23.596 "is_configured": true, 00:08:23.596 "data_offset": 2048, 00:08:23.596 "data_size": 63488 00:08:23.596 }, 00:08:23.596 { 00:08:23.596 "name": "BaseBdev2", 00:08:23.596 "uuid": "13d0378d-ea60-5383-a82c-992cab10080c", 00:08:23.596 "is_configured": true, 00:08:23.596 "data_offset": 2048, 00:08:23.596 "data_size": 63488 00:08:23.596 } 00:08:23.596 ] 00:08:23.596 }' 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.596 10:53:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.163 10:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:24.163 10:53:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:24.163 [2024-11-15 10:53:30.866125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:25.099 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:25.099 10:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.099 10:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.099 [2024-11-15 10:53:31.790545] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:25.099 [2024-11-15 10:53:31.790711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:25.099 [2024-11-15 10:53:31.791003] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:25.099 10:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.100 "name": "raid_bdev1", 00:08:25.100 "uuid": "89c1fe9a-f74c-47ff-9c1f-3953948e4de8", 00:08:25.100 "strip_size_kb": 0, 00:08:25.100 "state": "online", 00:08:25.100 "raid_level": "raid1", 00:08:25.100 "superblock": true, 00:08:25.100 "num_base_bdevs": 2, 00:08:25.100 "num_base_bdevs_discovered": 1, 00:08:25.100 "num_base_bdevs_operational": 1, 00:08:25.100 "base_bdevs_list": [ 00:08:25.100 { 00:08:25.100 "name": null, 00:08:25.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.100 "is_configured": false, 00:08:25.100 "data_offset": 0, 00:08:25.100 "data_size": 63488 00:08:25.100 }, 00:08:25.100 { 00:08:25.100 "name": 
"BaseBdev2", 00:08:25.100 "uuid": "13d0378d-ea60-5383-a82c-992cab10080c", 00:08:25.100 "is_configured": true, 00:08:25.100 "data_offset": 2048, 00:08:25.100 "data_size": 63488 00:08:25.100 } 00:08:25.100 ] 00:08:25.100 }' 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.100 10:53:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.359 10:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:25.359 10:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.359 10:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.359 [2024-11-15 10:53:32.239656] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:25.359 [2024-11-15 10:53:32.239762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.359 [2024-11-15 10:53:32.242415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.359 [2024-11-15 10:53:32.242514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.359 [2024-11-15 10:53:32.242617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.359 [2024-11-15 10:53:32.242676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:25.359 { 00:08:25.359 "results": [ 00:08:25.359 { 00:08:25.359 "job": "raid_bdev1", 00:08:25.359 "core_mask": "0x1", 00:08:25.359 "workload": "randrw", 00:08:25.359 "percentage": 50, 00:08:25.359 "status": "finished", 00:08:25.359 "queue_depth": 1, 00:08:25.359 "io_size": 131072, 00:08:25.359 "runtime": 1.37399, 00:08:25.359 "iops": 19741.773957597943, 00:08:25.359 "mibps": 2467.721744699743, 00:08:25.359 "io_failed": 0, 00:08:25.359 "io_timeout": 0, 
00:08:25.359 "avg_latency_us": 47.91738277825851, 00:08:25.359 "min_latency_us": 22.581659388646287, 00:08:25.359 "max_latency_us": 1438.071615720524 00:08:25.359 } 00:08:25.359 ], 00:08:25.359 "core_count": 1 00:08:25.359 } 00:08:25.359 10:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.359 10:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63818 00:08:25.359 10:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 63818 ']' 00:08:25.359 10:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 63818 00:08:25.359 10:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:08:25.359 10:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:25.359 10:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63818 00:08:25.618 killing process with pid 63818 00:08:25.618 10:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:25.618 10:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:25.618 10:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63818' 00:08:25.618 10:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 63818 00:08:25.618 [2024-11-15 10:53:32.290967] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.618 10:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 63818 00:08:25.618 [2024-11-15 10:53:32.429933] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:27.012 10:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.osgvFKDo0K 00:08:27.012 10:53:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:27.012 10:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:27.012 10:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:27.012 10:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:27.012 ************************************ 00:08:27.012 END TEST raid_write_error_test 00:08:27.012 ************************************ 00:08:27.012 10:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:27.012 10:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:27.012 10:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:27.012 00:08:27.012 real 0m4.441s 00:08:27.012 user 0m5.317s 00:08:27.012 sys 0m0.550s 00:08:27.012 10:53:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:27.012 10:53:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.012 10:53:33 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:27.012 10:53:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:27.012 10:53:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:27.012 10:53:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:27.012 10:53:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:27.012 10:53:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:27.012 ************************************ 00:08:27.012 START TEST raid_state_function_test 00:08:27.012 ************************************ 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:27.012 
10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:27.012 Process raid pid: 63957 00:08:27.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63957 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63957' 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63957 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 63957 ']' 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:27.012 10:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.012 [2024-11-15 10:53:33.818522] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:08:27.012 [2024-11-15 10:53:33.818736] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.271 [2024-11-15 10:53:33.977043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.271 [2024-11-15 10:53:34.097797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.529 [2024-11-15 10:53:34.305765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.529 [2024-11-15 10:53:34.305902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.788 [2024-11-15 10:53:34.665215] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.788 [2024-11-15 10:53:34.665270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.788 [2024-11-15 10:53:34.665281] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:27.788 [2024-11-15 10:53:34.665290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:27.788 [2024-11-15 10:53:34.665297] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:27.788 [2024-11-15 10:53:34.665320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.788 "name": "Existed_Raid", 00:08:27.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.788 "strip_size_kb": 64, 00:08:27.788 "state": "configuring", 00:08:27.788 "raid_level": "raid0", 00:08:27.788 "superblock": false, 00:08:27.788 "num_base_bdevs": 3, 00:08:27.788 "num_base_bdevs_discovered": 0, 00:08:27.788 "num_base_bdevs_operational": 3, 00:08:27.788 "base_bdevs_list": [ 00:08:27.788 { 00:08:27.788 "name": "BaseBdev1", 00:08:27.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.788 "is_configured": false, 00:08:27.788 "data_offset": 0, 00:08:27.788 "data_size": 0 00:08:27.788 }, 00:08:27.788 { 00:08:27.788 "name": "BaseBdev2", 00:08:27.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.788 "is_configured": false, 00:08:27.788 "data_offset": 0, 00:08:27.788 "data_size": 0 00:08:27.788 }, 00:08:27.788 { 00:08:27.788 "name": "BaseBdev3", 00:08:27.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.788 "is_configured": false, 00:08:27.788 "data_offset": 0, 00:08:27.788 "data_size": 0 00:08:27.788 } 00:08:27.788 ] 00:08:27.788 }' 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.788 10:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.354 10:53:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.354 [2024-11-15 10:53:35.096468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:28.354 [2024-11-15 10:53:35.096563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.354 [2024-11-15 10:53:35.104440] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:28.354 [2024-11-15 10:53:35.104533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:28.354 [2024-11-15 10:53:35.104580] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.354 [2024-11-15 10:53:35.104624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.354 [2024-11-15 10:53:35.104664] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:28.354 [2024-11-15 10:53:35.104699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.354 [2024-11-15 10:53:35.150919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.354 BaseBdev1 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.354 [ 00:08:28.354 { 00:08:28.354 "name": "BaseBdev1", 00:08:28.354 "aliases": [ 00:08:28.354 "228af97b-1b95-431f-8416-1186fd269396" 00:08:28.354 ], 00:08:28.354 
"product_name": "Malloc disk", 00:08:28.354 "block_size": 512, 00:08:28.354 "num_blocks": 65536, 00:08:28.354 "uuid": "228af97b-1b95-431f-8416-1186fd269396", 00:08:28.354 "assigned_rate_limits": { 00:08:28.354 "rw_ios_per_sec": 0, 00:08:28.354 "rw_mbytes_per_sec": 0, 00:08:28.354 "r_mbytes_per_sec": 0, 00:08:28.354 "w_mbytes_per_sec": 0 00:08:28.354 }, 00:08:28.354 "claimed": true, 00:08:28.354 "claim_type": "exclusive_write", 00:08:28.354 "zoned": false, 00:08:28.354 "supported_io_types": { 00:08:28.354 "read": true, 00:08:28.354 "write": true, 00:08:28.354 "unmap": true, 00:08:28.354 "flush": true, 00:08:28.354 "reset": true, 00:08:28.354 "nvme_admin": false, 00:08:28.354 "nvme_io": false, 00:08:28.354 "nvme_io_md": false, 00:08:28.354 "write_zeroes": true, 00:08:28.354 "zcopy": true, 00:08:28.354 "get_zone_info": false, 00:08:28.354 "zone_management": false, 00:08:28.354 "zone_append": false, 00:08:28.354 "compare": false, 00:08:28.354 "compare_and_write": false, 00:08:28.354 "abort": true, 00:08:28.354 "seek_hole": false, 00:08:28.354 "seek_data": false, 00:08:28.354 "copy": true, 00:08:28.354 "nvme_iov_md": false 00:08:28.354 }, 00:08:28.354 "memory_domains": [ 00:08:28.354 { 00:08:28.354 "dma_device_id": "system", 00:08:28.354 "dma_device_type": 1 00:08:28.354 }, 00:08:28.354 { 00:08:28.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.354 "dma_device_type": 2 00:08:28.354 } 00:08:28.354 ], 00:08:28.354 "driver_specific": {} 00:08:28.354 } 00:08:28.354 ] 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.354 10:53:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.354 "name": "Existed_Raid", 00:08:28.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.354 "strip_size_kb": 64, 00:08:28.354 "state": "configuring", 00:08:28.354 "raid_level": "raid0", 00:08:28.354 "superblock": false, 00:08:28.354 "num_base_bdevs": 3, 00:08:28.354 "num_base_bdevs_discovered": 1, 00:08:28.354 "num_base_bdevs_operational": 3, 00:08:28.354 "base_bdevs_list": [ 00:08:28.354 { 00:08:28.354 "name": "BaseBdev1", 
00:08:28.354 "uuid": "228af97b-1b95-431f-8416-1186fd269396", 00:08:28.354 "is_configured": true, 00:08:28.354 "data_offset": 0, 00:08:28.354 "data_size": 65536 00:08:28.354 }, 00:08:28.354 { 00:08:28.354 "name": "BaseBdev2", 00:08:28.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.354 "is_configured": false, 00:08:28.354 "data_offset": 0, 00:08:28.354 "data_size": 0 00:08:28.354 }, 00:08:28.354 { 00:08:28.354 "name": "BaseBdev3", 00:08:28.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.354 "is_configured": false, 00:08:28.354 "data_offset": 0, 00:08:28.354 "data_size": 0 00:08:28.354 } 00:08:28.354 ] 00:08:28.354 }' 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.354 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.922 [2024-11-15 10:53:35.610185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:28.922 [2024-11-15 10:53:35.610242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.922 [2024-11-15 
10:53:35.622208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.922 [2024-11-15 10:53:35.624257] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.922 [2024-11-15 10:53:35.624321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.922 [2024-11-15 10:53:35.624333] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:28.922 [2024-11-15 10:53:35.624344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.922 "name": "Existed_Raid", 00:08:28.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.922 "strip_size_kb": 64, 00:08:28.922 "state": "configuring", 00:08:28.922 "raid_level": "raid0", 00:08:28.922 "superblock": false, 00:08:28.922 "num_base_bdevs": 3, 00:08:28.922 "num_base_bdevs_discovered": 1, 00:08:28.922 "num_base_bdevs_operational": 3, 00:08:28.922 "base_bdevs_list": [ 00:08:28.922 { 00:08:28.922 "name": "BaseBdev1", 00:08:28.922 "uuid": "228af97b-1b95-431f-8416-1186fd269396", 00:08:28.922 "is_configured": true, 00:08:28.922 "data_offset": 0, 00:08:28.922 "data_size": 65536 00:08:28.922 }, 00:08:28.922 { 00:08:28.922 "name": "BaseBdev2", 00:08:28.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.922 "is_configured": false, 00:08:28.922 "data_offset": 0, 00:08:28.922 "data_size": 0 00:08:28.922 }, 00:08:28.922 { 00:08:28.922 "name": "BaseBdev3", 00:08:28.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.922 "is_configured": false, 00:08:28.922 "data_offset": 0, 00:08:28.922 "data_size": 0 00:08:28.922 } 00:08:28.922 ] 00:08:28.922 }' 00:08:28.922 10:53:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:28.923 10:53:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.181 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:29.181 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.181 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.441 [2024-11-15 10:53:36.124350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.441 BaseBdev2 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:29.441 10:53:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.441 [ 00:08:29.441 { 00:08:29.441 "name": "BaseBdev2", 00:08:29.441 "aliases": [ 00:08:29.441 "2d7a4b6c-00e9-49fd-b2bb-1bd555fdb2e7" 00:08:29.441 ], 00:08:29.441 "product_name": "Malloc disk", 00:08:29.441 "block_size": 512, 00:08:29.441 "num_blocks": 65536, 00:08:29.441 "uuid": "2d7a4b6c-00e9-49fd-b2bb-1bd555fdb2e7", 00:08:29.441 "assigned_rate_limits": { 00:08:29.441 "rw_ios_per_sec": 0, 00:08:29.441 "rw_mbytes_per_sec": 0, 00:08:29.441 "r_mbytes_per_sec": 0, 00:08:29.441 "w_mbytes_per_sec": 0 00:08:29.441 }, 00:08:29.441 "claimed": true, 00:08:29.441 "claim_type": "exclusive_write", 00:08:29.441 "zoned": false, 00:08:29.441 "supported_io_types": { 00:08:29.441 "read": true, 00:08:29.441 "write": true, 00:08:29.441 "unmap": true, 00:08:29.441 "flush": true, 00:08:29.441 "reset": true, 00:08:29.441 "nvme_admin": false, 00:08:29.441 "nvme_io": false, 00:08:29.441 "nvme_io_md": false, 00:08:29.441 "write_zeroes": true, 00:08:29.441 "zcopy": true, 00:08:29.441 "get_zone_info": false, 00:08:29.441 "zone_management": false, 00:08:29.441 "zone_append": false, 00:08:29.441 "compare": false, 00:08:29.441 "compare_and_write": false, 00:08:29.441 "abort": true, 00:08:29.441 "seek_hole": false, 00:08:29.441 "seek_data": false, 00:08:29.441 "copy": true, 00:08:29.441 "nvme_iov_md": false 00:08:29.441 }, 00:08:29.441 "memory_domains": [ 00:08:29.441 { 00:08:29.441 "dma_device_id": "system", 00:08:29.441 "dma_device_type": 1 00:08:29.441 }, 00:08:29.441 { 00:08:29.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.441 "dma_device_type": 2 00:08:29.441 } 00:08:29.441 ], 00:08:29.441 "driver_specific": {} 00:08:29.441 } 00:08:29.441 ] 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.441 10:53:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:29.441 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.442 "name": "Existed_Raid", 00:08:29.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.442 "strip_size_kb": 64, 00:08:29.442 "state": "configuring", 00:08:29.442 "raid_level": "raid0", 00:08:29.442 "superblock": false, 00:08:29.442 "num_base_bdevs": 3, 00:08:29.442 "num_base_bdevs_discovered": 2, 00:08:29.442 "num_base_bdevs_operational": 3, 00:08:29.442 "base_bdevs_list": [ 00:08:29.442 { 00:08:29.442 "name": "BaseBdev1", 00:08:29.442 "uuid": "228af97b-1b95-431f-8416-1186fd269396", 00:08:29.442 "is_configured": true, 00:08:29.442 "data_offset": 0, 00:08:29.442 "data_size": 65536 00:08:29.442 }, 00:08:29.442 { 00:08:29.442 "name": "BaseBdev2", 00:08:29.442 "uuid": "2d7a4b6c-00e9-49fd-b2bb-1bd555fdb2e7", 00:08:29.442 "is_configured": true, 00:08:29.442 "data_offset": 0, 00:08:29.442 "data_size": 65536 00:08:29.442 }, 00:08:29.442 { 00:08:29.442 "name": "BaseBdev3", 00:08:29.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.442 "is_configured": false, 00:08:29.442 "data_offset": 0, 00:08:29.442 "data_size": 0 00:08:29.442 } 00:08:29.442 ] 00:08:29.442 }' 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.442 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.701 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:29.701 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.701 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.962 [2024-11-15 10:53:36.671535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:29.962 [2024-11-15 10:53:36.671582] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:29.962 [2024-11-15 10:53:36.671595] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:29.962 [2024-11-15 10:53:36.671866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:29.962 [2024-11-15 10:53:36.672048] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:29.962 [2024-11-15 10:53:36.672057] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:29.962 [2024-11-15 10:53:36.672375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.962 BaseBdev3 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.962 
10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.962 [ 00:08:29.962 { 00:08:29.962 "name": "BaseBdev3", 00:08:29.962 "aliases": [ 00:08:29.962 "09a29222-2fb6-4e0b-a757-7e9fab45223c" 00:08:29.962 ], 00:08:29.962 "product_name": "Malloc disk", 00:08:29.962 "block_size": 512, 00:08:29.962 "num_blocks": 65536, 00:08:29.962 "uuid": "09a29222-2fb6-4e0b-a757-7e9fab45223c", 00:08:29.962 "assigned_rate_limits": { 00:08:29.962 "rw_ios_per_sec": 0, 00:08:29.962 "rw_mbytes_per_sec": 0, 00:08:29.962 "r_mbytes_per_sec": 0, 00:08:29.962 "w_mbytes_per_sec": 0 00:08:29.962 }, 00:08:29.962 "claimed": true, 00:08:29.962 "claim_type": "exclusive_write", 00:08:29.962 "zoned": false, 00:08:29.962 "supported_io_types": { 00:08:29.962 "read": true, 00:08:29.962 "write": true, 00:08:29.962 "unmap": true, 00:08:29.962 "flush": true, 00:08:29.962 "reset": true, 00:08:29.962 "nvme_admin": false, 00:08:29.962 "nvme_io": false, 00:08:29.962 "nvme_io_md": false, 00:08:29.962 "write_zeroes": true, 00:08:29.962 "zcopy": true, 00:08:29.962 "get_zone_info": false, 00:08:29.962 "zone_management": false, 00:08:29.962 "zone_append": false, 00:08:29.962 "compare": false, 00:08:29.962 "compare_and_write": false, 00:08:29.962 "abort": true, 00:08:29.962 "seek_hole": false, 00:08:29.962 "seek_data": false, 00:08:29.962 "copy": true, 00:08:29.962 "nvme_iov_md": false 00:08:29.962 }, 00:08:29.962 "memory_domains": [ 00:08:29.962 { 00:08:29.962 "dma_device_id": "system", 00:08:29.962 "dma_device_type": 1 00:08:29.962 }, 00:08:29.962 { 00:08:29.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.962 "dma_device_type": 2 00:08:29.962 } 00:08:29.962 ], 00:08:29.962 "driver_specific": {} 00:08:29.962 } 00:08:29.962 ] 
00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.962 "name": "Existed_Raid", 00:08:29.962 "uuid": "000ce046-8d9c-4108-b838-c5b8a46c4447", 00:08:29.962 "strip_size_kb": 64, 00:08:29.962 "state": "online", 00:08:29.962 "raid_level": "raid0", 00:08:29.962 "superblock": false, 00:08:29.962 "num_base_bdevs": 3, 00:08:29.962 "num_base_bdevs_discovered": 3, 00:08:29.962 "num_base_bdevs_operational": 3, 00:08:29.962 "base_bdevs_list": [ 00:08:29.962 { 00:08:29.962 "name": "BaseBdev1", 00:08:29.962 "uuid": "228af97b-1b95-431f-8416-1186fd269396", 00:08:29.962 "is_configured": true, 00:08:29.962 "data_offset": 0, 00:08:29.962 "data_size": 65536 00:08:29.962 }, 00:08:29.962 { 00:08:29.962 "name": "BaseBdev2", 00:08:29.962 "uuid": "2d7a4b6c-00e9-49fd-b2bb-1bd555fdb2e7", 00:08:29.962 "is_configured": true, 00:08:29.962 "data_offset": 0, 00:08:29.962 "data_size": 65536 00:08:29.962 }, 00:08:29.962 { 00:08:29.962 "name": "BaseBdev3", 00:08:29.962 "uuid": "09a29222-2fb6-4e0b-a757-7e9fab45223c", 00:08:29.962 "is_configured": true, 00:08:29.962 "data_offset": 0, 00:08:29.962 "data_size": 65536 00:08:29.962 } 00:08:29.962 ] 00:08:29.962 }' 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.962 10:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:30.590 [2024-11-15 10:53:37.175090] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:30.590 "name": "Existed_Raid", 00:08:30.590 "aliases": [ 00:08:30.590 "000ce046-8d9c-4108-b838-c5b8a46c4447" 00:08:30.590 ], 00:08:30.590 "product_name": "Raid Volume", 00:08:30.590 "block_size": 512, 00:08:30.590 "num_blocks": 196608, 00:08:30.590 "uuid": "000ce046-8d9c-4108-b838-c5b8a46c4447", 00:08:30.590 "assigned_rate_limits": { 00:08:30.590 "rw_ios_per_sec": 0, 00:08:30.590 "rw_mbytes_per_sec": 0, 00:08:30.590 "r_mbytes_per_sec": 0, 00:08:30.590 "w_mbytes_per_sec": 0 00:08:30.590 }, 00:08:30.590 "claimed": false, 00:08:30.590 "zoned": false, 00:08:30.590 "supported_io_types": { 00:08:30.590 "read": true, 00:08:30.590 "write": true, 00:08:30.590 "unmap": true, 00:08:30.590 "flush": true, 00:08:30.590 "reset": true, 00:08:30.590 "nvme_admin": false, 00:08:30.590 "nvme_io": false, 00:08:30.590 "nvme_io_md": false, 00:08:30.590 "write_zeroes": true, 00:08:30.590 "zcopy": false, 00:08:30.590 "get_zone_info": false, 00:08:30.590 "zone_management": false, 00:08:30.590 
"zone_append": false, 00:08:30.590 "compare": false, 00:08:30.590 "compare_and_write": false, 00:08:30.590 "abort": false, 00:08:30.590 "seek_hole": false, 00:08:30.590 "seek_data": false, 00:08:30.590 "copy": false, 00:08:30.590 "nvme_iov_md": false 00:08:30.590 }, 00:08:30.590 "memory_domains": [ 00:08:30.590 { 00:08:30.590 "dma_device_id": "system", 00:08:30.590 "dma_device_type": 1 00:08:30.590 }, 00:08:30.590 { 00:08:30.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.590 "dma_device_type": 2 00:08:30.590 }, 00:08:30.590 { 00:08:30.590 "dma_device_id": "system", 00:08:30.590 "dma_device_type": 1 00:08:30.590 }, 00:08:30.590 { 00:08:30.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.590 "dma_device_type": 2 00:08:30.590 }, 00:08:30.590 { 00:08:30.590 "dma_device_id": "system", 00:08:30.590 "dma_device_type": 1 00:08:30.590 }, 00:08:30.590 { 00:08:30.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.590 "dma_device_type": 2 00:08:30.590 } 00:08:30.590 ], 00:08:30.590 "driver_specific": { 00:08:30.590 "raid": { 00:08:30.590 "uuid": "000ce046-8d9c-4108-b838-c5b8a46c4447", 00:08:30.590 "strip_size_kb": 64, 00:08:30.590 "state": "online", 00:08:30.590 "raid_level": "raid0", 00:08:30.590 "superblock": false, 00:08:30.590 "num_base_bdevs": 3, 00:08:30.590 "num_base_bdevs_discovered": 3, 00:08:30.590 "num_base_bdevs_operational": 3, 00:08:30.590 "base_bdevs_list": [ 00:08:30.590 { 00:08:30.590 "name": "BaseBdev1", 00:08:30.590 "uuid": "228af97b-1b95-431f-8416-1186fd269396", 00:08:30.590 "is_configured": true, 00:08:30.590 "data_offset": 0, 00:08:30.590 "data_size": 65536 00:08:30.590 }, 00:08:30.590 { 00:08:30.590 "name": "BaseBdev2", 00:08:30.590 "uuid": "2d7a4b6c-00e9-49fd-b2bb-1bd555fdb2e7", 00:08:30.590 "is_configured": true, 00:08:30.590 "data_offset": 0, 00:08:30.590 "data_size": 65536 00:08:30.590 }, 00:08:30.590 { 00:08:30.590 "name": "BaseBdev3", 00:08:30.590 "uuid": "09a29222-2fb6-4e0b-a757-7e9fab45223c", 00:08:30.590 "is_configured": true, 
00:08:30.590 "data_offset": 0, 00:08:30.590 "data_size": 65536 00:08:30.590 } 00:08:30.590 ] 00:08:30.590 } 00:08:30.590 } 00:08:30.590 }' 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:30.590 BaseBdev2 00:08:30.590 BaseBdev3' 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.590 10:53:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.590 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.590 [2024-11-15 10:53:37.462382] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:30.590 [2024-11-15 10:53:37.462415] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.590 [2024-11-15 10:53:37.462473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.848 "name": "Existed_Raid", 00:08:30.848 "uuid": "000ce046-8d9c-4108-b838-c5b8a46c4447", 00:08:30.848 "strip_size_kb": 64, 00:08:30.848 "state": "offline", 00:08:30.848 "raid_level": "raid0", 00:08:30.848 "superblock": false, 00:08:30.848 "num_base_bdevs": 3, 00:08:30.848 "num_base_bdevs_discovered": 2, 00:08:30.848 "num_base_bdevs_operational": 2, 00:08:30.848 "base_bdevs_list": [ 00:08:30.848 { 00:08:30.848 "name": null, 00:08:30.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.848 "is_configured": false, 00:08:30.848 "data_offset": 0, 00:08:30.848 "data_size": 65536 00:08:30.848 }, 00:08:30.848 { 00:08:30.848 "name": "BaseBdev2", 00:08:30.848 "uuid": "2d7a4b6c-00e9-49fd-b2bb-1bd555fdb2e7", 00:08:30.848 "is_configured": true, 00:08:30.848 "data_offset": 0, 00:08:30.848 "data_size": 65536 00:08:30.848 }, 00:08:30.848 { 00:08:30.848 "name": "BaseBdev3", 00:08:30.848 "uuid": "09a29222-2fb6-4e0b-a757-7e9fab45223c", 00:08:30.848 "is_configured": true, 00:08:30.848 "data_offset": 0, 00:08:30.848 "data_size": 65536 00:08:30.848 } 00:08:30.848 ] 00:08:30.848 }' 00:08:30.848 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.848 10:53:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.106 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:31.106 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.106 10:53:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.106 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.106 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:31.106 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.106 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.364 [2024-11-15 10:53:38.054195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.364 10:53:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.364 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.364 [2024-11-15 10:53:38.207263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:31.364 [2024-11-15 10:53:38.207348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.622 BaseBdev2 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.622 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.622 [ 00:08:31.622 { 00:08:31.622 "name": "BaseBdev2", 00:08:31.622 "aliases": [ 00:08:31.622 "d544aca8-c836-458e-8d78-c52e91ad26f3" 00:08:31.622 ], 00:08:31.622 "product_name": "Malloc disk", 00:08:31.622 "block_size": 512, 00:08:31.622 "num_blocks": 65536, 00:08:31.622 "uuid": "d544aca8-c836-458e-8d78-c52e91ad26f3", 00:08:31.622 "assigned_rate_limits": { 00:08:31.622 "rw_ios_per_sec": 0, 00:08:31.622 "rw_mbytes_per_sec": 0, 00:08:31.622 "r_mbytes_per_sec": 0, 00:08:31.622 "w_mbytes_per_sec": 0 00:08:31.622 }, 00:08:31.622 "claimed": false, 00:08:31.622 "zoned": false, 00:08:31.622 "supported_io_types": { 00:08:31.622 "read": true, 00:08:31.622 "write": true, 00:08:31.622 "unmap": true, 00:08:31.622 "flush": true, 00:08:31.622 "reset": true, 00:08:31.622 "nvme_admin": false, 00:08:31.622 "nvme_io": false, 00:08:31.622 "nvme_io_md": false, 00:08:31.622 "write_zeroes": true, 00:08:31.622 "zcopy": true, 00:08:31.622 "get_zone_info": false, 00:08:31.623 "zone_management": false, 00:08:31.623 "zone_append": false, 00:08:31.623 "compare": false, 00:08:31.623 "compare_and_write": false, 00:08:31.623 "abort": true, 00:08:31.623 "seek_hole": false, 00:08:31.623 "seek_data": false, 00:08:31.623 "copy": true, 00:08:31.623 "nvme_iov_md": false 00:08:31.623 }, 00:08:31.623 "memory_domains": [ 00:08:31.623 { 00:08:31.623 "dma_device_id": "system", 00:08:31.623 "dma_device_type": 1 00:08:31.623 }, 
00:08:31.623 { 00:08:31.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.623 "dma_device_type": 2 00:08:31.623 } 00:08:31.623 ], 00:08:31.623 "driver_specific": {} 00:08:31.623 } 00:08:31.623 ] 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.623 BaseBdev3 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.623 [ 00:08:31.623 { 00:08:31.623 "name": "BaseBdev3", 00:08:31.623 "aliases": [ 00:08:31.623 "13a69fdd-e711-46a5-9d5c-a03caeeaace8" 00:08:31.623 ], 00:08:31.623 "product_name": "Malloc disk", 00:08:31.623 "block_size": 512, 00:08:31.623 "num_blocks": 65536, 00:08:31.623 "uuid": "13a69fdd-e711-46a5-9d5c-a03caeeaace8", 00:08:31.623 "assigned_rate_limits": { 00:08:31.623 "rw_ios_per_sec": 0, 00:08:31.623 "rw_mbytes_per_sec": 0, 00:08:31.623 "r_mbytes_per_sec": 0, 00:08:31.623 "w_mbytes_per_sec": 0 00:08:31.623 }, 00:08:31.623 "claimed": false, 00:08:31.623 "zoned": false, 00:08:31.623 "supported_io_types": { 00:08:31.623 "read": true, 00:08:31.623 "write": true, 00:08:31.623 "unmap": true, 00:08:31.623 "flush": true, 00:08:31.623 "reset": true, 00:08:31.623 "nvme_admin": false, 00:08:31.623 "nvme_io": false, 00:08:31.623 "nvme_io_md": false, 00:08:31.623 "write_zeroes": true, 00:08:31.623 "zcopy": true, 00:08:31.623 "get_zone_info": false, 00:08:31.623 "zone_management": false, 00:08:31.623 "zone_append": false, 00:08:31.623 "compare": false, 00:08:31.623 "compare_and_write": false, 00:08:31.623 "abort": true, 00:08:31.623 "seek_hole": false, 00:08:31.623 "seek_data": false, 00:08:31.623 "copy": true, 00:08:31.623 "nvme_iov_md": false 00:08:31.623 }, 00:08:31.623 "memory_domains": [ 00:08:31.623 { 00:08:31.623 "dma_device_id": "system", 00:08:31.623 "dma_device_type": 1 00:08:31.623 }, 00:08:31.623 { 
00:08:31.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.623 "dma_device_type": 2 00:08:31.623 } 00:08:31.623 ], 00:08:31.623 "driver_specific": {} 00:08:31.623 } 00:08:31.623 ] 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.623 [2024-11-15 10:53:38.523633] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:31.623 [2024-11-15 10:53:38.523724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:31.623 [2024-11-15 10:53:38.523808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.623 [2024-11-15 10:53:38.525836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.623 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.881 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.881 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.881 "name": "Existed_Raid", 00:08:31.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.881 "strip_size_kb": 64, 00:08:31.881 "state": "configuring", 00:08:31.881 "raid_level": "raid0", 00:08:31.881 "superblock": false, 00:08:31.881 "num_base_bdevs": 3, 00:08:31.881 "num_base_bdevs_discovered": 2, 00:08:31.881 "num_base_bdevs_operational": 3, 00:08:31.881 "base_bdevs_list": [ 00:08:31.881 { 00:08:31.881 "name": "BaseBdev1", 00:08:31.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.881 
"is_configured": false, 00:08:31.881 "data_offset": 0, 00:08:31.881 "data_size": 0 00:08:31.881 }, 00:08:31.881 { 00:08:31.881 "name": "BaseBdev2", 00:08:31.881 "uuid": "d544aca8-c836-458e-8d78-c52e91ad26f3", 00:08:31.881 "is_configured": true, 00:08:31.881 "data_offset": 0, 00:08:31.881 "data_size": 65536 00:08:31.881 }, 00:08:31.881 { 00:08:31.881 "name": "BaseBdev3", 00:08:31.881 "uuid": "13a69fdd-e711-46a5-9d5c-a03caeeaace8", 00:08:31.881 "is_configured": true, 00:08:31.881 "data_offset": 0, 00:08:31.881 "data_size": 65536 00:08:31.881 } 00:08:31.881 ] 00:08:31.881 }' 00:08:31.881 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.881 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.139 [2024-11-15 10:53:38.974877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.139 10:53:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.139 10:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.139 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.139 "name": "Existed_Raid", 00:08:32.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.139 "strip_size_kb": 64, 00:08:32.139 "state": "configuring", 00:08:32.139 "raid_level": "raid0", 00:08:32.139 "superblock": false, 00:08:32.139 "num_base_bdevs": 3, 00:08:32.139 "num_base_bdevs_discovered": 1, 00:08:32.139 "num_base_bdevs_operational": 3, 00:08:32.139 "base_bdevs_list": [ 00:08:32.139 { 00:08:32.139 "name": "BaseBdev1", 00:08:32.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.139 "is_configured": false, 00:08:32.139 "data_offset": 0, 00:08:32.139 "data_size": 0 00:08:32.139 }, 00:08:32.139 { 00:08:32.139 "name": null, 00:08:32.139 "uuid": "d544aca8-c836-458e-8d78-c52e91ad26f3", 00:08:32.139 "is_configured": false, 00:08:32.139 "data_offset": 0, 
00:08:32.139 "data_size": 65536 00:08:32.139 }, 00:08:32.139 { 00:08:32.139 "name": "BaseBdev3", 00:08:32.139 "uuid": "13a69fdd-e711-46a5-9d5c-a03caeeaace8", 00:08:32.139 "is_configured": true, 00:08:32.139 "data_offset": 0, 00:08:32.139 "data_size": 65536 00:08:32.139 } 00:08:32.139 ] 00:08:32.139 }' 00:08:32.139 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.139 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.704 [2024-11-15 10:53:39.546781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.704 BaseBdev1 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev1 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.704 [ 00:08:32.704 { 00:08:32.704 "name": "BaseBdev1", 00:08:32.704 "aliases": [ 00:08:32.704 "ded5a2ae-4fd2-40c9-b962-e6bd318059f4" 00:08:32.704 ], 00:08:32.704 "product_name": "Malloc disk", 00:08:32.704 "block_size": 512, 00:08:32.704 "num_blocks": 65536, 00:08:32.704 "uuid": "ded5a2ae-4fd2-40c9-b962-e6bd318059f4", 00:08:32.704 "assigned_rate_limits": { 00:08:32.704 "rw_ios_per_sec": 0, 00:08:32.704 "rw_mbytes_per_sec": 0, 00:08:32.704 "r_mbytes_per_sec": 0, 00:08:32.704 "w_mbytes_per_sec": 0 00:08:32.704 }, 00:08:32.704 "claimed": true, 00:08:32.704 "claim_type": "exclusive_write", 00:08:32.704 "zoned": false, 00:08:32.704 "supported_io_types": { 00:08:32.704 "read": true, 00:08:32.704 "write": true, 00:08:32.704 "unmap": 
true, 00:08:32.704 "flush": true, 00:08:32.704 "reset": true, 00:08:32.704 "nvme_admin": false, 00:08:32.704 "nvme_io": false, 00:08:32.704 "nvme_io_md": false, 00:08:32.704 "write_zeroes": true, 00:08:32.704 "zcopy": true, 00:08:32.704 "get_zone_info": false, 00:08:32.704 "zone_management": false, 00:08:32.704 "zone_append": false, 00:08:32.704 "compare": false, 00:08:32.704 "compare_and_write": false, 00:08:32.704 "abort": true, 00:08:32.704 "seek_hole": false, 00:08:32.704 "seek_data": false, 00:08:32.704 "copy": true, 00:08:32.704 "nvme_iov_md": false 00:08:32.704 }, 00:08:32.704 "memory_domains": [ 00:08:32.704 { 00:08:32.704 "dma_device_id": "system", 00:08:32.704 "dma_device_type": 1 00:08:32.704 }, 00:08:32.704 { 00:08:32.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.704 "dma_device_type": 2 00:08:32.704 } 00:08:32.704 ], 00:08:32.704 "driver_specific": {} 00:08:32.704 } 00:08:32.704 ] 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.704 10:53:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.704 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.705 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.705 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.705 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.705 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.705 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.705 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.961 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.961 "name": "Existed_Raid", 00:08:32.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.962 "strip_size_kb": 64, 00:08:32.962 "state": "configuring", 00:08:32.962 "raid_level": "raid0", 00:08:32.962 "superblock": false, 00:08:32.962 "num_base_bdevs": 3, 00:08:32.962 "num_base_bdevs_discovered": 2, 00:08:32.962 "num_base_bdevs_operational": 3, 00:08:32.962 "base_bdevs_list": [ 00:08:32.962 { 00:08:32.962 "name": "BaseBdev1", 00:08:32.962 "uuid": "ded5a2ae-4fd2-40c9-b962-e6bd318059f4", 00:08:32.962 "is_configured": true, 00:08:32.962 "data_offset": 0, 00:08:32.962 "data_size": 65536 00:08:32.962 }, 00:08:32.962 { 00:08:32.962 "name": null, 00:08:32.962 "uuid": "d544aca8-c836-458e-8d78-c52e91ad26f3", 00:08:32.962 "is_configured": false, 00:08:32.962 "data_offset": 0, 00:08:32.962 "data_size": 65536 00:08:32.962 }, 00:08:32.962 { 00:08:32.962 "name": "BaseBdev3", 00:08:32.962 "uuid": "13a69fdd-e711-46a5-9d5c-a03caeeaace8", 00:08:32.962 "is_configured": true, 00:08:32.962 "data_offset": 0, 
00:08:32.962 "data_size": 65536 00:08:32.962 } 00:08:32.962 ] 00:08:32.962 }' 00:08:32.962 10:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.962 10:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.220 [2024-11-15 10:53:40.085904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.220 "name": "Existed_Raid", 00:08:33.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.220 "strip_size_kb": 64, 00:08:33.220 "state": "configuring", 00:08:33.220 "raid_level": "raid0", 00:08:33.220 "superblock": false, 00:08:33.220 "num_base_bdevs": 3, 00:08:33.220 "num_base_bdevs_discovered": 1, 00:08:33.220 "num_base_bdevs_operational": 3, 00:08:33.220 "base_bdevs_list": [ 00:08:33.220 { 00:08:33.220 "name": "BaseBdev1", 00:08:33.220 "uuid": "ded5a2ae-4fd2-40c9-b962-e6bd318059f4", 00:08:33.220 "is_configured": true, 00:08:33.220 "data_offset": 0, 00:08:33.220 "data_size": 65536 00:08:33.220 }, 00:08:33.220 { 
00:08:33.220 "name": null, 00:08:33.220 "uuid": "d544aca8-c836-458e-8d78-c52e91ad26f3", 00:08:33.220 "is_configured": false, 00:08:33.220 "data_offset": 0, 00:08:33.220 "data_size": 65536 00:08:33.220 }, 00:08:33.220 { 00:08:33.220 "name": null, 00:08:33.220 "uuid": "13a69fdd-e711-46a5-9d5c-a03caeeaace8", 00:08:33.220 "is_configured": false, 00:08:33.220 "data_offset": 0, 00:08:33.220 "data_size": 65536 00:08:33.220 } 00:08:33.220 ] 00:08:33.220 }' 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.220 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.788 [2024-11-15 10:53:40.585116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.788 "name": "Existed_Raid", 00:08:33.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.788 "strip_size_kb": 64, 00:08:33.788 "state": "configuring", 00:08:33.788 "raid_level": "raid0", 00:08:33.788 
"superblock": false, 00:08:33.788 "num_base_bdevs": 3, 00:08:33.788 "num_base_bdevs_discovered": 2, 00:08:33.788 "num_base_bdevs_operational": 3, 00:08:33.788 "base_bdevs_list": [ 00:08:33.788 { 00:08:33.788 "name": "BaseBdev1", 00:08:33.788 "uuid": "ded5a2ae-4fd2-40c9-b962-e6bd318059f4", 00:08:33.788 "is_configured": true, 00:08:33.788 "data_offset": 0, 00:08:33.788 "data_size": 65536 00:08:33.788 }, 00:08:33.788 { 00:08:33.788 "name": null, 00:08:33.788 "uuid": "d544aca8-c836-458e-8d78-c52e91ad26f3", 00:08:33.788 "is_configured": false, 00:08:33.788 "data_offset": 0, 00:08:33.788 "data_size": 65536 00:08:33.788 }, 00:08:33.788 { 00:08:33.788 "name": "BaseBdev3", 00:08:33.788 "uuid": "13a69fdd-e711-46a5-9d5c-a03caeeaace8", 00:08:33.788 "is_configured": true, 00:08:33.788 "data_offset": 0, 00:08:33.788 "data_size": 65536 00:08:33.788 } 00:08:33.788 ] 00:08:33.788 }' 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.788 10:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.354 [2024-11-15 10:53:41.108253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.354 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.354 "name": "Existed_Raid", 00:08:34.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.354 "strip_size_kb": 64, 00:08:34.354 "state": "configuring", 00:08:34.354 "raid_level": "raid0", 00:08:34.354 "superblock": false, 00:08:34.354 "num_base_bdevs": 3, 00:08:34.354 "num_base_bdevs_discovered": 1, 00:08:34.354 "num_base_bdevs_operational": 3, 00:08:34.354 "base_bdevs_list": [ 00:08:34.354 { 00:08:34.354 "name": null, 00:08:34.354 "uuid": "ded5a2ae-4fd2-40c9-b962-e6bd318059f4", 00:08:34.354 "is_configured": false, 00:08:34.354 "data_offset": 0, 00:08:34.354 "data_size": 65536 00:08:34.354 }, 00:08:34.354 { 00:08:34.354 "name": null, 00:08:34.354 "uuid": "d544aca8-c836-458e-8d78-c52e91ad26f3", 00:08:34.354 "is_configured": false, 00:08:34.354 "data_offset": 0, 00:08:34.354 "data_size": 65536 00:08:34.354 }, 00:08:34.354 { 00:08:34.354 "name": "BaseBdev3", 00:08:34.354 "uuid": "13a69fdd-e711-46a5-9d5c-a03caeeaace8", 00:08:34.354 "is_configured": true, 00:08:34.354 "data_offset": 0, 00:08:34.354 "data_size": 65536 00:08:34.354 } 00:08:34.354 ] 00:08:34.354 }' 00:08:34.355 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.355 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.923 [2024-11-15 10:53:41.728063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.923 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.923 "name": "Existed_Raid", 00:08:34.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.923 "strip_size_kb": 64, 00:08:34.923 "state": "configuring", 00:08:34.923 "raid_level": "raid0", 00:08:34.923 "superblock": false, 00:08:34.923 "num_base_bdevs": 3, 00:08:34.923 "num_base_bdevs_discovered": 2, 00:08:34.923 "num_base_bdevs_operational": 3, 00:08:34.923 "base_bdevs_list": [ 00:08:34.923 { 00:08:34.923 "name": null, 00:08:34.923 "uuid": "ded5a2ae-4fd2-40c9-b962-e6bd318059f4", 00:08:34.923 "is_configured": false, 00:08:34.923 "data_offset": 0, 00:08:34.923 "data_size": 65536 00:08:34.923 }, 00:08:34.923 { 00:08:34.923 "name": "BaseBdev2", 00:08:34.923 "uuid": "d544aca8-c836-458e-8d78-c52e91ad26f3", 00:08:34.923 "is_configured": true, 00:08:34.924 "data_offset": 0, 00:08:34.924 "data_size": 65536 00:08:34.924 }, 00:08:34.924 { 00:08:34.924 "name": "BaseBdev3", 00:08:34.924 "uuid": "13a69fdd-e711-46a5-9d5c-a03caeeaace8", 00:08:34.924 "is_configured": true, 00:08:34.924 "data_offset": 0, 00:08:34.924 "data_size": 65536 00:08:34.924 } 00:08:34.924 ] 00:08:34.924 }' 00:08:34.924 10:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.924 10:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:35.491 
10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ded5a2ae-4fd2-40c9-b962-e6bd318059f4 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.491 [2024-11-15 10:53:42.302436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:35.491 [2024-11-15 10:53:42.302487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:35.491 [2024-11-15 10:53:42.302497] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:35.491 [2024-11-15 10:53:42.302747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:35.491 [2024-11-15 10:53:42.302897] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:35.491 [2024-11-15 10:53:42.302907] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:35.491 [2024-11-15 10:53:42.303171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.491 NewBaseBdev 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:35.491 [ 00:08:35.491 { 00:08:35.491 "name": "NewBaseBdev", 00:08:35.491 "aliases": [ 00:08:35.491 "ded5a2ae-4fd2-40c9-b962-e6bd318059f4" 00:08:35.491 ], 00:08:35.491 "product_name": "Malloc disk", 00:08:35.491 "block_size": 512, 00:08:35.491 "num_blocks": 65536, 00:08:35.491 "uuid": "ded5a2ae-4fd2-40c9-b962-e6bd318059f4", 00:08:35.491 "assigned_rate_limits": { 00:08:35.491 "rw_ios_per_sec": 0, 00:08:35.491 "rw_mbytes_per_sec": 0, 00:08:35.491 "r_mbytes_per_sec": 0, 00:08:35.491 "w_mbytes_per_sec": 0 00:08:35.491 }, 00:08:35.491 "claimed": true, 00:08:35.491 "claim_type": "exclusive_write", 00:08:35.491 "zoned": false, 00:08:35.491 "supported_io_types": { 00:08:35.491 "read": true, 00:08:35.491 "write": true, 00:08:35.491 "unmap": true, 00:08:35.491 "flush": true, 00:08:35.491 "reset": true, 00:08:35.491 "nvme_admin": false, 00:08:35.491 "nvme_io": false, 00:08:35.491 "nvme_io_md": false, 00:08:35.491 "write_zeroes": true, 00:08:35.491 "zcopy": true, 00:08:35.491 "get_zone_info": false, 00:08:35.491 "zone_management": false, 00:08:35.491 "zone_append": false, 00:08:35.491 "compare": false, 00:08:35.491 "compare_and_write": false, 00:08:35.491 "abort": true, 00:08:35.491 "seek_hole": false, 00:08:35.491 "seek_data": false, 00:08:35.491 "copy": true, 00:08:35.491 "nvme_iov_md": false 00:08:35.491 }, 00:08:35.491 "memory_domains": [ 00:08:35.491 { 00:08:35.491 "dma_device_id": "system", 00:08:35.491 "dma_device_type": 1 00:08:35.491 }, 00:08:35.491 { 00:08:35.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.491 "dma_device_type": 2 00:08:35.491 } 00:08:35.491 ], 00:08:35.491 "driver_specific": {} 00:08:35.491 } 00:08:35.491 ] 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.491 "name": "Existed_Raid", 00:08:35.491 "uuid": "e5f9b2ba-39f1-4bf6-af19-24034ab457ca", 00:08:35.491 "strip_size_kb": 64, 00:08:35.491 "state": "online", 00:08:35.491 "raid_level": "raid0", 00:08:35.491 "superblock": false, 00:08:35.491 "num_base_bdevs": 3, 00:08:35.491 
"num_base_bdevs_discovered": 3, 00:08:35.491 "num_base_bdevs_operational": 3, 00:08:35.491 "base_bdevs_list": [ 00:08:35.491 { 00:08:35.491 "name": "NewBaseBdev", 00:08:35.491 "uuid": "ded5a2ae-4fd2-40c9-b962-e6bd318059f4", 00:08:35.491 "is_configured": true, 00:08:35.491 "data_offset": 0, 00:08:35.491 "data_size": 65536 00:08:35.491 }, 00:08:35.491 { 00:08:35.491 "name": "BaseBdev2", 00:08:35.491 "uuid": "d544aca8-c836-458e-8d78-c52e91ad26f3", 00:08:35.491 "is_configured": true, 00:08:35.491 "data_offset": 0, 00:08:35.491 "data_size": 65536 00:08:35.491 }, 00:08:35.491 { 00:08:35.491 "name": "BaseBdev3", 00:08:35.491 "uuid": "13a69fdd-e711-46a5-9d5c-a03caeeaace8", 00:08:35.491 "is_configured": true, 00:08:35.491 "data_offset": 0, 00:08:35.491 "data_size": 65536 00:08:35.491 } 00:08:35.491 ] 00:08:35.491 }' 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.491 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.058 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:36.058 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:36.058 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:36.058 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:36.058 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:36.058 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:36.058 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:36.058 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:36.058 10:53:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.058 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.058 [2024-11-15 10:53:42.857839] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.058 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.058 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:36.058 "name": "Existed_Raid", 00:08:36.058 "aliases": [ 00:08:36.058 "e5f9b2ba-39f1-4bf6-af19-24034ab457ca" 00:08:36.058 ], 00:08:36.058 "product_name": "Raid Volume", 00:08:36.058 "block_size": 512, 00:08:36.058 "num_blocks": 196608, 00:08:36.058 "uuid": "e5f9b2ba-39f1-4bf6-af19-24034ab457ca", 00:08:36.058 "assigned_rate_limits": { 00:08:36.058 "rw_ios_per_sec": 0, 00:08:36.058 "rw_mbytes_per_sec": 0, 00:08:36.058 "r_mbytes_per_sec": 0, 00:08:36.058 "w_mbytes_per_sec": 0 00:08:36.058 }, 00:08:36.058 "claimed": false, 00:08:36.058 "zoned": false, 00:08:36.058 "supported_io_types": { 00:08:36.058 "read": true, 00:08:36.058 "write": true, 00:08:36.058 "unmap": true, 00:08:36.058 "flush": true, 00:08:36.058 "reset": true, 00:08:36.058 "nvme_admin": false, 00:08:36.058 "nvme_io": false, 00:08:36.058 "nvme_io_md": false, 00:08:36.058 "write_zeroes": true, 00:08:36.058 "zcopy": false, 00:08:36.058 "get_zone_info": false, 00:08:36.058 "zone_management": false, 00:08:36.058 "zone_append": false, 00:08:36.058 "compare": false, 00:08:36.058 "compare_and_write": false, 00:08:36.058 "abort": false, 00:08:36.058 "seek_hole": false, 00:08:36.058 "seek_data": false, 00:08:36.058 "copy": false, 00:08:36.058 "nvme_iov_md": false 00:08:36.058 }, 00:08:36.058 "memory_domains": [ 00:08:36.058 { 00:08:36.058 "dma_device_id": "system", 00:08:36.058 "dma_device_type": 1 00:08:36.058 }, 00:08:36.058 { 00:08:36.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.058 "dma_device_type": 2 00:08:36.058 }, 
00:08:36.058 { 00:08:36.058 "dma_device_id": "system", 00:08:36.058 "dma_device_type": 1 00:08:36.058 }, 00:08:36.058 { 00:08:36.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.058 "dma_device_type": 2 00:08:36.058 }, 00:08:36.058 { 00:08:36.058 "dma_device_id": "system", 00:08:36.058 "dma_device_type": 1 00:08:36.058 }, 00:08:36.058 { 00:08:36.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.058 "dma_device_type": 2 00:08:36.058 } 00:08:36.058 ], 00:08:36.058 "driver_specific": { 00:08:36.058 "raid": { 00:08:36.058 "uuid": "e5f9b2ba-39f1-4bf6-af19-24034ab457ca", 00:08:36.058 "strip_size_kb": 64, 00:08:36.058 "state": "online", 00:08:36.058 "raid_level": "raid0", 00:08:36.058 "superblock": false, 00:08:36.058 "num_base_bdevs": 3, 00:08:36.058 "num_base_bdevs_discovered": 3, 00:08:36.058 "num_base_bdevs_operational": 3, 00:08:36.058 "base_bdevs_list": [ 00:08:36.058 { 00:08:36.058 "name": "NewBaseBdev", 00:08:36.058 "uuid": "ded5a2ae-4fd2-40c9-b962-e6bd318059f4", 00:08:36.058 "is_configured": true, 00:08:36.058 "data_offset": 0, 00:08:36.058 "data_size": 65536 00:08:36.058 }, 00:08:36.059 { 00:08:36.059 "name": "BaseBdev2", 00:08:36.059 "uuid": "d544aca8-c836-458e-8d78-c52e91ad26f3", 00:08:36.059 "is_configured": true, 00:08:36.059 "data_offset": 0, 00:08:36.059 "data_size": 65536 00:08:36.059 }, 00:08:36.059 { 00:08:36.059 "name": "BaseBdev3", 00:08:36.059 "uuid": "13a69fdd-e711-46a5-9d5c-a03caeeaace8", 00:08:36.059 "is_configured": true, 00:08:36.059 "data_offset": 0, 00:08:36.059 "data_size": 65536 00:08:36.059 } 00:08:36.059 ] 00:08:36.059 } 00:08:36.059 } 00:08:36.059 }' 00:08:36.059 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:36.059 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:36.059 BaseBdev2 00:08:36.059 BaseBdev3' 00:08:36.059 10:53:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.059 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:36.059 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.318 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.318 10:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:36.318 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.318 10:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.318 [2024-11-15 10:53:43.137092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:36.318 [2024-11-15 10:53:43.137198] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.318 [2024-11-15 10:53:43.137337] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.318 [2024-11-15 10:53:43.137397] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.318 [2024-11-15 10:53:43.137409] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63957 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 63957 ']' 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 63957 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63957 00:08:36.318 killing process with pid 63957 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63957' 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 63957 00:08:36.318 [2024-11-15 10:53:43.184311] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.318 10:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 63957 00:08:36.576 [2024-11-15 10:53:43.498676] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:37.950 00:08:37.950 real 0m10.900s 00:08:37.950 user 0m17.331s 00:08:37.950 sys 0m1.966s 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- 
# xtrace_disable 00:08:37.950 ************************************ 00:08:37.950 END TEST raid_state_function_test 00:08:37.950 ************************************ 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.950 10:53:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:37.950 10:53:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:37.950 10:53:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:37.950 10:53:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.950 ************************************ 00:08:37.950 START TEST raid_state_function_test_sb 00:08:37.950 ************************************ 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64585 00:08:37.950 10:53:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64585' 00:08:37.950 Process raid pid: 64585 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64585 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64585 ']' 00:08:37.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:37.950 10:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.950 [2024-11-15 10:53:44.799504] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:08:37.950 [2024-11-15 10:53:44.799631] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.208 [2024-11-15 10:53:44.963660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.208 [2024-11-15 10:53:45.078767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.466 [2024-11-15 10:53:45.285988] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.466 [2024-11-15 10:53:45.286051] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.724 10:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:38.724 10:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:38.724 10:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.724 10:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.724 10:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.992 [2024-11-15 10:53:45.654574] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.992 [2024-11-15 10:53:45.654630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.992 [2024-11-15 10:53:45.654641] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.992 [2024-11-15 10:53:45.654651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.992 [2024-11-15 10:53:45.654657] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:38.992 [2024-11-15 10:53:45.654666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.992 10:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.992 "name": "Existed_Raid", 00:08:38.992 "uuid": "b633b2ca-e612-4c9d-ad10-45d7f6cc5553", 00:08:38.992 "strip_size_kb": 64, 00:08:38.992 "state": "configuring", 00:08:38.992 "raid_level": "raid0", 00:08:38.992 "superblock": true, 00:08:38.992 "num_base_bdevs": 3, 00:08:38.992 "num_base_bdevs_discovered": 0, 00:08:38.992 "num_base_bdevs_operational": 3, 00:08:38.992 "base_bdevs_list": [ 00:08:38.993 { 00:08:38.993 "name": "BaseBdev1", 00:08:38.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.993 "is_configured": false, 00:08:38.993 "data_offset": 0, 00:08:38.993 "data_size": 0 00:08:38.993 }, 00:08:38.993 { 00:08:38.993 "name": "BaseBdev2", 00:08:38.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.993 "is_configured": false, 00:08:38.993 "data_offset": 0, 00:08:38.993 "data_size": 0 00:08:38.993 }, 00:08:38.993 { 00:08:38.993 "name": "BaseBdev3", 00:08:38.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.993 "is_configured": false, 00:08:38.993 "data_offset": 0, 00:08:38.993 "data_size": 0 00:08:38.993 } 00:08:38.993 ] 00:08:38.993 }' 00:08:38.993 10:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.993 10:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.251 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:39.251 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.251 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.251 [2024-11-15 10:53:46.117753] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:39.251 [2024-11-15 10:53:46.117796] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:39.251 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.251 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:39.251 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.251 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.251 [2024-11-15 10:53:46.125731] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.252 [2024-11-15 10:53:46.125826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.252 [2024-11-15 10:53:46.125858] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.252 [2024-11-15 10:53:46.125885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.252 [2024-11-15 10:53:46.125907] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:39.252 [2024-11-15 10:53:46.125931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:39.252 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.252 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:39.252 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.252 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.252 [2024-11-15 10:53:46.171092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.252 BaseBdev1 
00:08:39.252 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.252 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:39.252 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:39.252 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:39.252 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:39.252 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:39.252 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:39.252 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:39.252 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.252 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.511 [ 00:08:39.511 { 00:08:39.511 "name": "BaseBdev1", 00:08:39.511 "aliases": [ 00:08:39.511 "666a5c8b-f084-4ef9-b774-c6da19e5b559" 00:08:39.511 ], 00:08:39.511 "product_name": "Malloc disk", 00:08:39.511 "block_size": 512, 00:08:39.511 "num_blocks": 65536, 00:08:39.511 "uuid": "666a5c8b-f084-4ef9-b774-c6da19e5b559", 00:08:39.511 "assigned_rate_limits": { 00:08:39.511 
"rw_ios_per_sec": 0, 00:08:39.511 "rw_mbytes_per_sec": 0, 00:08:39.511 "r_mbytes_per_sec": 0, 00:08:39.511 "w_mbytes_per_sec": 0 00:08:39.511 }, 00:08:39.511 "claimed": true, 00:08:39.511 "claim_type": "exclusive_write", 00:08:39.511 "zoned": false, 00:08:39.511 "supported_io_types": { 00:08:39.511 "read": true, 00:08:39.511 "write": true, 00:08:39.511 "unmap": true, 00:08:39.511 "flush": true, 00:08:39.511 "reset": true, 00:08:39.511 "nvme_admin": false, 00:08:39.511 "nvme_io": false, 00:08:39.511 "nvme_io_md": false, 00:08:39.511 "write_zeroes": true, 00:08:39.511 "zcopy": true, 00:08:39.511 "get_zone_info": false, 00:08:39.511 "zone_management": false, 00:08:39.511 "zone_append": false, 00:08:39.511 "compare": false, 00:08:39.511 "compare_and_write": false, 00:08:39.511 "abort": true, 00:08:39.511 "seek_hole": false, 00:08:39.511 "seek_data": false, 00:08:39.511 "copy": true, 00:08:39.511 "nvme_iov_md": false 00:08:39.511 }, 00:08:39.511 "memory_domains": [ 00:08:39.511 { 00:08:39.511 "dma_device_id": "system", 00:08:39.511 "dma_device_type": 1 00:08:39.511 }, 00:08:39.511 { 00:08:39.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.511 "dma_device_type": 2 00:08:39.511 } 00:08:39.511 ], 00:08:39.511 "driver_specific": {} 00:08:39.511 } 00:08:39.511 ] 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.511 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.511 "name": "Existed_Raid", 00:08:39.511 "uuid": "f8f221b3-f784-46fb-80b9-9629886e3618", 00:08:39.511 "strip_size_kb": 64, 00:08:39.511 "state": "configuring", 00:08:39.511 "raid_level": "raid0", 00:08:39.511 "superblock": true, 00:08:39.511 "num_base_bdevs": 3, 00:08:39.511 "num_base_bdevs_discovered": 1, 00:08:39.511 "num_base_bdevs_operational": 3, 00:08:39.512 "base_bdevs_list": [ 00:08:39.512 { 00:08:39.512 "name": "BaseBdev1", 00:08:39.512 "uuid": "666a5c8b-f084-4ef9-b774-c6da19e5b559", 00:08:39.512 "is_configured": true, 00:08:39.512 "data_offset": 2048, 00:08:39.512 "data_size": 63488 
00:08:39.512 }, 00:08:39.512 { 00:08:39.512 "name": "BaseBdev2", 00:08:39.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.512 "is_configured": false, 00:08:39.512 "data_offset": 0, 00:08:39.512 "data_size": 0 00:08:39.512 }, 00:08:39.512 { 00:08:39.512 "name": "BaseBdev3", 00:08:39.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.512 "is_configured": false, 00:08:39.512 "data_offset": 0, 00:08:39.512 "data_size": 0 00:08:39.512 } 00:08:39.512 ] 00:08:39.512 }' 00:08:39.512 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.512 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.770 [2024-11-15 10:53:46.650346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:39.770 [2024-11-15 10:53:46.650402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.770 [2024-11-15 10:53:46.662399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.770 [2024-11-15 
10:53:46.664395] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.770 [2024-11-15 10:53:46.664493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.770 [2024-11-15 10:53:46.664509] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:39.770 [2024-11-15 10:53:46.664520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.770 10:53:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.029 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.029 "name": "Existed_Raid", 00:08:40.029 "uuid": "ba9a524e-5bdf-438f-8d5b-bbad56558185", 00:08:40.029 "strip_size_kb": 64, 00:08:40.029 "state": "configuring", 00:08:40.029 "raid_level": "raid0", 00:08:40.029 "superblock": true, 00:08:40.029 "num_base_bdevs": 3, 00:08:40.029 "num_base_bdevs_discovered": 1, 00:08:40.029 "num_base_bdevs_operational": 3, 00:08:40.029 "base_bdevs_list": [ 00:08:40.029 { 00:08:40.029 "name": "BaseBdev1", 00:08:40.029 "uuid": "666a5c8b-f084-4ef9-b774-c6da19e5b559", 00:08:40.029 "is_configured": true, 00:08:40.029 "data_offset": 2048, 00:08:40.029 "data_size": 63488 00:08:40.029 }, 00:08:40.029 { 00:08:40.029 "name": "BaseBdev2", 00:08:40.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.029 "is_configured": false, 00:08:40.029 "data_offset": 0, 00:08:40.029 "data_size": 0 00:08:40.029 }, 00:08:40.029 { 00:08:40.029 "name": "BaseBdev3", 00:08:40.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.029 "is_configured": false, 00:08:40.029 "data_offset": 0, 00:08:40.029 "data_size": 0 00:08:40.029 } 00:08:40.029 ] 00:08:40.029 }' 00:08:40.029 10:53:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.029 10:53:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.287 [2024-11-15 10:53:47.190389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.287 BaseBdev2 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.287 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.545 [ 00:08:40.545 { 00:08:40.545 "name": "BaseBdev2", 00:08:40.545 "aliases": [ 00:08:40.546 "5b8cea49-3120-4c98-b108-14db37ad31e1" 00:08:40.546 ], 00:08:40.546 "product_name": "Malloc disk", 00:08:40.546 "block_size": 512, 00:08:40.546 "num_blocks": 65536, 00:08:40.546 "uuid": "5b8cea49-3120-4c98-b108-14db37ad31e1", 00:08:40.546 "assigned_rate_limits": { 00:08:40.546 "rw_ios_per_sec": 0, 00:08:40.546 "rw_mbytes_per_sec": 0, 00:08:40.546 "r_mbytes_per_sec": 0, 00:08:40.546 "w_mbytes_per_sec": 0 00:08:40.546 }, 00:08:40.546 "claimed": true, 00:08:40.546 "claim_type": "exclusive_write", 00:08:40.546 "zoned": false, 00:08:40.546 "supported_io_types": { 00:08:40.546 "read": true, 00:08:40.546 "write": true, 00:08:40.546 "unmap": true, 00:08:40.546 "flush": true, 00:08:40.546 "reset": true, 00:08:40.546 "nvme_admin": false, 00:08:40.546 "nvme_io": false, 00:08:40.546 "nvme_io_md": false, 00:08:40.546 "write_zeroes": true, 00:08:40.546 "zcopy": true, 00:08:40.546 "get_zone_info": false, 00:08:40.546 "zone_management": false, 00:08:40.546 "zone_append": false, 00:08:40.546 "compare": false, 00:08:40.546 "compare_and_write": false, 00:08:40.546 "abort": true, 00:08:40.546 "seek_hole": false, 00:08:40.546 "seek_data": false, 00:08:40.546 "copy": true, 00:08:40.546 "nvme_iov_md": false 00:08:40.546 }, 00:08:40.546 "memory_domains": [ 00:08:40.546 { 00:08:40.546 "dma_device_id": "system", 00:08:40.546 "dma_device_type": 1 00:08:40.546 }, 00:08:40.546 { 00:08:40.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.546 "dma_device_type": 2 00:08:40.546 } 00:08:40.546 ], 00:08:40.546 "driver_specific": {} 00:08:40.546 } 00:08:40.546 ] 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.546 "name": "Existed_Raid", 00:08:40.546 "uuid": "ba9a524e-5bdf-438f-8d5b-bbad56558185", 00:08:40.546 "strip_size_kb": 64, 00:08:40.546 "state": "configuring", 00:08:40.546 "raid_level": "raid0", 00:08:40.546 "superblock": true, 00:08:40.546 "num_base_bdevs": 3, 00:08:40.546 "num_base_bdevs_discovered": 2, 00:08:40.546 "num_base_bdevs_operational": 3, 00:08:40.546 "base_bdevs_list": [ 00:08:40.546 { 00:08:40.546 "name": "BaseBdev1", 00:08:40.546 "uuid": "666a5c8b-f084-4ef9-b774-c6da19e5b559", 00:08:40.546 "is_configured": true, 00:08:40.546 "data_offset": 2048, 00:08:40.546 "data_size": 63488 00:08:40.546 }, 00:08:40.546 { 00:08:40.546 "name": "BaseBdev2", 00:08:40.546 "uuid": "5b8cea49-3120-4c98-b108-14db37ad31e1", 00:08:40.546 "is_configured": true, 00:08:40.546 "data_offset": 2048, 00:08:40.546 "data_size": 63488 00:08:40.546 }, 00:08:40.546 { 00:08:40.546 "name": "BaseBdev3", 00:08:40.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.546 "is_configured": false, 00:08:40.546 "data_offset": 0, 00:08:40.546 "data_size": 0 00:08:40.546 } 00:08:40.546 ] 00:08:40.546 }' 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.546 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.805 [2024-11-15 10:53:47.705380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:40.805 [2024-11-15 10:53:47.705758] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:40.805 [2024-11-15 10:53:47.705784] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:40.805 [2024-11-15 10:53:47.706060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:40.805 [2024-11-15 10:53:47.706208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:40.805 [2024-11-15 10:53:47.706216] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:40.805 BaseBdev3 00:08:40.805 [2024-11-15 10:53:47.706383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.805 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.805 [ 00:08:40.805 { 00:08:40.805 "name": "BaseBdev3", 00:08:40.805 "aliases": [ 00:08:40.805 "d83a73e2-94fb-4413-8f50-5d26d058ef38" 00:08:40.805 ], 00:08:40.805 "product_name": "Malloc disk", 00:08:40.805 "block_size": 512, 00:08:40.805 "num_blocks": 65536, 00:08:40.805 "uuid": "d83a73e2-94fb-4413-8f50-5d26d058ef38", 00:08:40.805 "assigned_rate_limits": { 00:08:40.805 "rw_ios_per_sec": 0, 00:08:40.805 "rw_mbytes_per_sec": 0, 00:08:40.805 "r_mbytes_per_sec": 0, 00:08:40.805 "w_mbytes_per_sec": 0 00:08:40.805 }, 00:08:40.805 "claimed": true, 00:08:40.805 "claim_type": "exclusive_write", 00:08:40.805 "zoned": false, 00:08:40.805 "supported_io_types": { 00:08:40.805 "read": true, 00:08:40.805 "write": true, 00:08:40.805 "unmap": true, 00:08:40.805 "flush": true, 00:08:40.805 "reset": true, 00:08:40.805 "nvme_admin": false, 00:08:40.805 "nvme_io": false, 00:08:40.805 "nvme_io_md": false, 00:08:40.805 "write_zeroes": true, 00:08:40.805 "zcopy": true, 00:08:40.805 "get_zone_info": false, 00:08:40.805 "zone_management": false, 00:08:40.805 "zone_append": false, 00:08:40.805 "compare": false, 00:08:40.805 "compare_and_write": false, 00:08:40.805 "abort": true, 00:08:40.805 "seek_hole": false, 00:08:40.805 "seek_data": false, 00:08:40.805 "copy": true, 00:08:40.805 "nvme_iov_md": false 00:08:40.805 }, 00:08:40.805 "memory_domains": [ 00:08:40.805 { 00:08:40.805 "dma_device_id": "system", 00:08:40.805 "dma_device_type": 1 00:08:40.805 }, 00:08:41.064 { 00:08:41.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.064 "dma_device_type": 2 00:08:41.064 } 00:08:41.064 ], 00:08:41.064 "driver_specific": 
{} 00:08:41.064 } 00:08:41.064 ] 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.064 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.064 "name": "Existed_Raid", 00:08:41.064 "uuid": "ba9a524e-5bdf-438f-8d5b-bbad56558185", 00:08:41.064 "strip_size_kb": 64, 00:08:41.064 "state": "online", 00:08:41.064 "raid_level": "raid0", 00:08:41.064 "superblock": true, 00:08:41.064 "num_base_bdevs": 3, 00:08:41.064 "num_base_bdevs_discovered": 3, 00:08:41.064 "num_base_bdevs_operational": 3, 00:08:41.064 "base_bdevs_list": [ 00:08:41.064 { 00:08:41.064 "name": "BaseBdev1", 00:08:41.064 "uuid": "666a5c8b-f084-4ef9-b774-c6da19e5b559", 00:08:41.064 "is_configured": true, 00:08:41.064 "data_offset": 2048, 00:08:41.064 "data_size": 63488 00:08:41.064 }, 00:08:41.064 { 00:08:41.064 "name": "BaseBdev2", 00:08:41.064 "uuid": "5b8cea49-3120-4c98-b108-14db37ad31e1", 00:08:41.064 "is_configured": true, 00:08:41.064 "data_offset": 2048, 00:08:41.064 "data_size": 63488 00:08:41.065 }, 00:08:41.065 { 00:08:41.065 "name": "BaseBdev3", 00:08:41.065 "uuid": "d83a73e2-94fb-4413-8f50-5d26d058ef38", 00:08:41.065 "is_configured": true, 00:08:41.065 "data_offset": 2048, 00:08:41.065 "data_size": 63488 00:08:41.065 } 00:08:41.065 ] 00:08:41.065 }' 00:08:41.065 10:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.065 10:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.323 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:41.323 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:41.323 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:41.323 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:41.323 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:41.323 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.323 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:41.323 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:41.323 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.323 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.323 [2024-11-15 10:53:48.188911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.323 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.323 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:41.323 "name": "Existed_Raid", 00:08:41.323 "aliases": [ 00:08:41.323 "ba9a524e-5bdf-438f-8d5b-bbad56558185" 00:08:41.323 ], 00:08:41.323 "product_name": "Raid Volume", 00:08:41.323 "block_size": 512, 00:08:41.323 "num_blocks": 190464, 00:08:41.323 "uuid": "ba9a524e-5bdf-438f-8d5b-bbad56558185", 00:08:41.323 "assigned_rate_limits": { 00:08:41.323 "rw_ios_per_sec": 0, 00:08:41.323 "rw_mbytes_per_sec": 0, 00:08:41.323 "r_mbytes_per_sec": 0, 00:08:41.323 "w_mbytes_per_sec": 0 00:08:41.323 }, 00:08:41.323 "claimed": false, 00:08:41.323 "zoned": false, 00:08:41.323 "supported_io_types": { 00:08:41.323 "read": true, 00:08:41.323 "write": true, 00:08:41.323 "unmap": true, 00:08:41.323 "flush": true, 00:08:41.323 "reset": true, 00:08:41.323 "nvme_admin": false, 00:08:41.323 "nvme_io": false, 00:08:41.323 "nvme_io_md": false, 00:08:41.323 
"write_zeroes": true, 00:08:41.323 "zcopy": false, 00:08:41.323 "get_zone_info": false, 00:08:41.323 "zone_management": false, 00:08:41.323 "zone_append": false, 00:08:41.323 "compare": false, 00:08:41.323 "compare_and_write": false, 00:08:41.323 "abort": false, 00:08:41.323 "seek_hole": false, 00:08:41.323 "seek_data": false, 00:08:41.323 "copy": false, 00:08:41.323 "nvme_iov_md": false 00:08:41.323 }, 00:08:41.323 "memory_domains": [ 00:08:41.323 { 00:08:41.323 "dma_device_id": "system", 00:08:41.323 "dma_device_type": 1 00:08:41.323 }, 00:08:41.323 { 00:08:41.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.323 "dma_device_type": 2 00:08:41.323 }, 00:08:41.323 { 00:08:41.323 "dma_device_id": "system", 00:08:41.323 "dma_device_type": 1 00:08:41.323 }, 00:08:41.323 { 00:08:41.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.323 "dma_device_type": 2 00:08:41.323 }, 00:08:41.323 { 00:08:41.323 "dma_device_id": "system", 00:08:41.323 "dma_device_type": 1 00:08:41.323 }, 00:08:41.323 { 00:08:41.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.323 "dma_device_type": 2 00:08:41.323 } 00:08:41.323 ], 00:08:41.323 "driver_specific": { 00:08:41.324 "raid": { 00:08:41.324 "uuid": "ba9a524e-5bdf-438f-8d5b-bbad56558185", 00:08:41.324 "strip_size_kb": 64, 00:08:41.324 "state": "online", 00:08:41.324 "raid_level": "raid0", 00:08:41.324 "superblock": true, 00:08:41.324 "num_base_bdevs": 3, 00:08:41.324 "num_base_bdevs_discovered": 3, 00:08:41.324 "num_base_bdevs_operational": 3, 00:08:41.324 "base_bdevs_list": [ 00:08:41.324 { 00:08:41.324 "name": "BaseBdev1", 00:08:41.324 "uuid": "666a5c8b-f084-4ef9-b774-c6da19e5b559", 00:08:41.324 "is_configured": true, 00:08:41.324 "data_offset": 2048, 00:08:41.324 "data_size": 63488 00:08:41.324 }, 00:08:41.324 { 00:08:41.324 "name": "BaseBdev2", 00:08:41.324 "uuid": "5b8cea49-3120-4c98-b108-14db37ad31e1", 00:08:41.324 "is_configured": true, 00:08:41.324 "data_offset": 2048, 00:08:41.324 "data_size": 63488 00:08:41.324 }, 
00:08:41.324 { 00:08:41.324 "name": "BaseBdev3", 00:08:41.324 "uuid": "d83a73e2-94fb-4413-8f50-5d26d058ef38", 00:08:41.324 "is_configured": true, 00:08:41.324 "data_offset": 2048, 00:08:41.324 "data_size": 63488 00:08:41.324 } 00:08:41.324 ] 00:08:41.324 } 00:08:41.324 } 00:08:41.324 }' 00:08:41.324 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:41.582 BaseBdev2 00:08:41.582 BaseBdev3' 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.582 
10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.582 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.582 [2024-11-15 10:53:48.460224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:41.582 [2024-11-15 10:53:48.460268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.582 [2024-11-15 10:53:48.460336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.840 "name": "Existed_Raid", 00:08:41.840 "uuid": "ba9a524e-5bdf-438f-8d5b-bbad56558185", 00:08:41.840 "strip_size_kb": 64, 00:08:41.840 "state": "offline", 00:08:41.840 "raid_level": "raid0", 00:08:41.840 "superblock": true, 00:08:41.840 "num_base_bdevs": 3, 00:08:41.840 "num_base_bdevs_discovered": 2, 00:08:41.840 "num_base_bdevs_operational": 2, 00:08:41.840 "base_bdevs_list": [ 00:08:41.840 { 00:08:41.840 "name": null, 00:08:41.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.840 "is_configured": false, 00:08:41.840 "data_offset": 0, 00:08:41.840 "data_size": 63488 00:08:41.840 }, 00:08:41.840 { 00:08:41.840 "name": "BaseBdev2", 00:08:41.840 "uuid": "5b8cea49-3120-4c98-b108-14db37ad31e1", 00:08:41.840 "is_configured": true, 00:08:41.840 "data_offset": 2048, 00:08:41.840 "data_size": 63488 00:08:41.840 }, 00:08:41.840 { 00:08:41.840 "name": "BaseBdev3", 00:08:41.840 "uuid": "d83a73e2-94fb-4413-8f50-5d26d058ef38", 
00:08:41.840 "is_configured": true, 00:08:41.840 "data_offset": 2048, 00:08:41.840 "data_size": 63488 00:08:41.840 } 00:08:41.840 ] 00:08:41.840 }' 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.840 10:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.097 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:42.097 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.097 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:42.097 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.097 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.097 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.356 [2024-11-15 10:53:49.048814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.356 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.356 [2024-11-15 10:53:49.200647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:42.356 [2024-11-15 10:53:49.200803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.619 BaseBdev2 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.619 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.619 [ 00:08:42.619 { 00:08:42.619 "name": "BaseBdev2", 00:08:42.619 "aliases": [ 00:08:42.619 "7b718db6-afa9-4287-bc29-d2d6079a35b5" 00:08:42.619 ], 00:08:42.619 "product_name": "Malloc disk", 00:08:42.619 "block_size": 512, 00:08:42.619 "num_blocks": 65536, 00:08:42.619 "uuid": "7b718db6-afa9-4287-bc29-d2d6079a35b5", 00:08:42.619 "assigned_rate_limits": { 00:08:42.619 "rw_ios_per_sec": 0, 00:08:42.619 "rw_mbytes_per_sec": 0, 00:08:42.619 "r_mbytes_per_sec": 0, 00:08:42.619 "w_mbytes_per_sec": 0 00:08:42.620 }, 00:08:42.620 "claimed": false, 00:08:42.620 "zoned": false, 00:08:42.620 "supported_io_types": { 00:08:42.620 "read": true, 00:08:42.620 "write": true, 00:08:42.620 "unmap": true, 00:08:42.620 "flush": true, 00:08:42.620 "reset": true, 00:08:42.620 "nvme_admin": false, 00:08:42.620 "nvme_io": false, 00:08:42.620 "nvme_io_md": false, 00:08:42.620 "write_zeroes": true, 00:08:42.620 "zcopy": true, 00:08:42.620 "get_zone_info": false, 00:08:42.620 "zone_management": false, 00:08:42.620 
"zone_append": false, 00:08:42.620 "compare": false, 00:08:42.620 "compare_and_write": false, 00:08:42.620 "abort": true, 00:08:42.620 "seek_hole": false, 00:08:42.620 "seek_data": false, 00:08:42.620 "copy": true, 00:08:42.620 "nvme_iov_md": false 00:08:42.620 }, 00:08:42.620 "memory_domains": [ 00:08:42.620 { 00:08:42.620 "dma_device_id": "system", 00:08:42.620 "dma_device_type": 1 00:08:42.620 }, 00:08:42.620 { 00:08:42.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.620 "dma_device_type": 2 00:08:42.620 } 00:08:42.620 ], 00:08:42.620 "driver_specific": {} 00:08:42.620 } 00:08:42.620 ] 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.620 BaseBdev3 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:42.620 
10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.620 [ 00:08:42.620 { 00:08:42.620 "name": "BaseBdev3", 00:08:42.620 "aliases": [ 00:08:42.620 "5f311193-0b1d-4054-919a-c9a27f52718b" 00:08:42.620 ], 00:08:42.620 "product_name": "Malloc disk", 00:08:42.620 "block_size": 512, 00:08:42.620 "num_blocks": 65536, 00:08:42.620 "uuid": "5f311193-0b1d-4054-919a-c9a27f52718b", 00:08:42.620 "assigned_rate_limits": { 00:08:42.620 "rw_ios_per_sec": 0, 00:08:42.620 "rw_mbytes_per_sec": 0, 00:08:42.620 "r_mbytes_per_sec": 0, 00:08:42.620 "w_mbytes_per_sec": 0 00:08:42.620 }, 00:08:42.620 "claimed": false, 00:08:42.620 "zoned": false, 00:08:42.620 "supported_io_types": { 00:08:42.620 "read": true, 00:08:42.620 "write": true, 00:08:42.620 "unmap": true, 00:08:42.620 "flush": true, 00:08:42.620 "reset": true, 00:08:42.620 "nvme_admin": false, 00:08:42.620 "nvme_io": false, 00:08:42.620 "nvme_io_md": false, 00:08:42.620 "write_zeroes": true, 00:08:42.620 "zcopy": true, 00:08:42.620 "get_zone_info": false, 
00:08:42.620 "zone_management": false, 00:08:42.620 "zone_append": false, 00:08:42.620 "compare": false, 00:08:42.620 "compare_and_write": false, 00:08:42.620 "abort": true, 00:08:42.620 "seek_hole": false, 00:08:42.620 "seek_data": false, 00:08:42.620 "copy": true, 00:08:42.620 "nvme_iov_md": false 00:08:42.620 }, 00:08:42.620 "memory_domains": [ 00:08:42.620 { 00:08:42.620 "dma_device_id": "system", 00:08:42.620 "dma_device_type": 1 00:08:42.620 }, 00:08:42.620 { 00:08:42.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.620 "dma_device_type": 2 00:08:42.620 } 00:08:42.620 ], 00:08:42.620 "driver_specific": {} 00:08:42.620 } 00:08:42.620 ] 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.620 [2024-11-15 10:53:49.504880] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:42.620 [2024-11-15 10:53:49.505036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:42.620 [2024-11-15 10:53:49.505120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:42.620 [2024-11-15 10:53:49.506981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.620 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.879 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:42.879 "name": "Existed_Raid", 00:08:42.879 "uuid": "f01babce-4325-4ca2-9b3c-8db3e9904ec9", 00:08:42.879 "strip_size_kb": 64, 00:08:42.879 "state": "configuring", 00:08:42.879 "raid_level": "raid0", 00:08:42.879 "superblock": true, 00:08:42.879 "num_base_bdevs": 3, 00:08:42.879 "num_base_bdevs_discovered": 2, 00:08:42.879 "num_base_bdevs_operational": 3, 00:08:42.879 "base_bdevs_list": [ 00:08:42.879 { 00:08:42.879 "name": "BaseBdev1", 00:08:42.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.879 "is_configured": false, 00:08:42.879 "data_offset": 0, 00:08:42.879 "data_size": 0 00:08:42.879 }, 00:08:42.879 { 00:08:42.879 "name": "BaseBdev2", 00:08:42.879 "uuid": "7b718db6-afa9-4287-bc29-d2d6079a35b5", 00:08:42.879 "is_configured": true, 00:08:42.879 "data_offset": 2048, 00:08:42.879 "data_size": 63488 00:08:42.879 }, 00:08:42.879 { 00:08:42.879 "name": "BaseBdev3", 00:08:42.879 "uuid": "5f311193-0b1d-4054-919a-c9a27f52718b", 00:08:42.879 "is_configured": true, 00:08:42.879 "data_offset": 2048, 00:08:42.879 "data_size": 63488 00:08:42.879 } 00:08:42.879 ] 00:08:42.879 }' 00:08:42.879 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.879 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.137 10:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:43.137 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.137 10:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.138 [2024-11-15 10:53:50.000053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.138 "name": "Existed_Raid", 00:08:43.138 "uuid": "f01babce-4325-4ca2-9b3c-8db3e9904ec9", 00:08:43.138 "strip_size_kb": 64, 00:08:43.138 "state": "configuring", 00:08:43.138 "raid_level": "raid0", 
00:08:43.138 "superblock": true, 00:08:43.138 "num_base_bdevs": 3, 00:08:43.138 "num_base_bdevs_discovered": 1, 00:08:43.138 "num_base_bdevs_operational": 3, 00:08:43.138 "base_bdevs_list": [ 00:08:43.138 { 00:08:43.138 "name": "BaseBdev1", 00:08:43.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.138 "is_configured": false, 00:08:43.138 "data_offset": 0, 00:08:43.138 "data_size": 0 00:08:43.138 }, 00:08:43.138 { 00:08:43.138 "name": null, 00:08:43.138 "uuid": "7b718db6-afa9-4287-bc29-d2d6079a35b5", 00:08:43.138 "is_configured": false, 00:08:43.138 "data_offset": 0, 00:08:43.138 "data_size": 63488 00:08:43.138 }, 00:08:43.138 { 00:08:43.138 "name": "BaseBdev3", 00:08:43.138 "uuid": "5f311193-0b1d-4054-919a-c9a27f52718b", 00:08:43.138 "is_configured": true, 00:08:43.138 "data_offset": 2048, 00:08:43.138 "data_size": 63488 00:08:43.138 } 00:08:43.138 ] 00:08:43.138 }' 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.138 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.705 [2024-11-15 10:53:50.568775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:43.705 BaseBdev1 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.705 [ 00:08:43.705 { 00:08:43.705 "name": "BaseBdev1", 00:08:43.705 
"aliases": [ 00:08:43.705 "72bc65bb-9fac-4958-8a18-368976b29add" 00:08:43.705 ], 00:08:43.705 "product_name": "Malloc disk", 00:08:43.705 "block_size": 512, 00:08:43.705 "num_blocks": 65536, 00:08:43.705 "uuid": "72bc65bb-9fac-4958-8a18-368976b29add", 00:08:43.705 "assigned_rate_limits": { 00:08:43.705 "rw_ios_per_sec": 0, 00:08:43.705 "rw_mbytes_per_sec": 0, 00:08:43.705 "r_mbytes_per_sec": 0, 00:08:43.705 "w_mbytes_per_sec": 0 00:08:43.705 }, 00:08:43.705 "claimed": true, 00:08:43.705 "claim_type": "exclusive_write", 00:08:43.705 "zoned": false, 00:08:43.705 "supported_io_types": { 00:08:43.705 "read": true, 00:08:43.705 "write": true, 00:08:43.705 "unmap": true, 00:08:43.705 "flush": true, 00:08:43.705 "reset": true, 00:08:43.705 "nvme_admin": false, 00:08:43.705 "nvme_io": false, 00:08:43.705 "nvme_io_md": false, 00:08:43.705 "write_zeroes": true, 00:08:43.705 "zcopy": true, 00:08:43.705 "get_zone_info": false, 00:08:43.705 "zone_management": false, 00:08:43.705 "zone_append": false, 00:08:43.705 "compare": false, 00:08:43.705 "compare_and_write": false, 00:08:43.705 "abort": true, 00:08:43.705 "seek_hole": false, 00:08:43.705 "seek_data": false, 00:08:43.705 "copy": true, 00:08:43.705 "nvme_iov_md": false 00:08:43.705 }, 00:08:43.705 "memory_domains": [ 00:08:43.705 { 00:08:43.705 "dma_device_id": "system", 00:08:43.705 "dma_device_type": 1 00:08:43.705 }, 00:08:43.705 { 00:08:43.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.705 "dma_device_type": 2 00:08:43.705 } 00:08:43.705 ], 00:08:43.705 "driver_specific": {} 00:08:43.705 } 00:08:43.705 ] 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:43.705 10:53:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.705 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.964 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.964 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.964 "name": "Existed_Raid", 00:08:43.964 "uuid": "f01babce-4325-4ca2-9b3c-8db3e9904ec9", 00:08:43.964 "strip_size_kb": 64, 00:08:43.964 "state": "configuring", 00:08:43.964 "raid_level": "raid0", 00:08:43.964 "superblock": true, 00:08:43.964 "num_base_bdevs": 3, 00:08:43.964 
"num_base_bdevs_discovered": 2, 00:08:43.964 "num_base_bdevs_operational": 3, 00:08:43.964 "base_bdevs_list": [ 00:08:43.964 { 00:08:43.964 "name": "BaseBdev1", 00:08:43.964 "uuid": "72bc65bb-9fac-4958-8a18-368976b29add", 00:08:43.964 "is_configured": true, 00:08:43.964 "data_offset": 2048, 00:08:43.964 "data_size": 63488 00:08:43.964 }, 00:08:43.964 { 00:08:43.964 "name": null, 00:08:43.964 "uuid": "7b718db6-afa9-4287-bc29-d2d6079a35b5", 00:08:43.964 "is_configured": false, 00:08:43.964 "data_offset": 0, 00:08:43.964 "data_size": 63488 00:08:43.964 }, 00:08:43.964 { 00:08:43.964 "name": "BaseBdev3", 00:08:43.964 "uuid": "5f311193-0b1d-4054-919a-c9a27f52718b", 00:08:43.964 "is_configured": true, 00:08:43.964 "data_offset": 2048, 00:08:43.964 "data_size": 63488 00:08:43.964 } 00:08:43.964 ] 00:08:43.964 }' 00:08:43.964 10:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.964 10:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.223 10:53:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.223 [2024-11-15 10:53:51.092072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.223 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.223 10:53:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.481 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.481 "name": "Existed_Raid", 00:08:44.481 "uuid": "f01babce-4325-4ca2-9b3c-8db3e9904ec9", 00:08:44.481 "strip_size_kb": 64, 00:08:44.481 "state": "configuring", 00:08:44.481 "raid_level": "raid0", 00:08:44.481 "superblock": true, 00:08:44.481 "num_base_bdevs": 3, 00:08:44.481 "num_base_bdevs_discovered": 1, 00:08:44.481 "num_base_bdevs_operational": 3, 00:08:44.481 "base_bdevs_list": [ 00:08:44.481 { 00:08:44.481 "name": "BaseBdev1", 00:08:44.481 "uuid": "72bc65bb-9fac-4958-8a18-368976b29add", 00:08:44.481 "is_configured": true, 00:08:44.482 "data_offset": 2048, 00:08:44.482 "data_size": 63488 00:08:44.482 }, 00:08:44.482 { 00:08:44.482 "name": null, 00:08:44.482 "uuid": "7b718db6-afa9-4287-bc29-d2d6079a35b5", 00:08:44.482 "is_configured": false, 00:08:44.482 "data_offset": 0, 00:08:44.482 "data_size": 63488 00:08:44.482 }, 00:08:44.482 { 00:08:44.482 "name": null, 00:08:44.482 "uuid": "5f311193-0b1d-4054-919a-c9a27f52718b", 00:08:44.482 "is_configured": false, 00:08:44.482 "data_offset": 0, 00:08:44.482 "data_size": 63488 00:08:44.482 } 00:08:44.482 ] 00:08:44.482 }' 00:08:44.482 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.482 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.740 10:53:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.740 [2024-11-15 10:53:51.567294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.740 "name": "Existed_Raid", 00:08:44.740 "uuid": "f01babce-4325-4ca2-9b3c-8db3e9904ec9", 00:08:44.740 "strip_size_kb": 64, 00:08:44.740 "state": "configuring", 00:08:44.740 "raid_level": "raid0", 00:08:44.740 "superblock": true, 00:08:44.740 "num_base_bdevs": 3, 00:08:44.740 "num_base_bdevs_discovered": 2, 00:08:44.740 "num_base_bdevs_operational": 3, 00:08:44.740 "base_bdevs_list": [ 00:08:44.740 { 00:08:44.740 "name": "BaseBdev1", 00:08:44.740 "uuid": "72bc65bb-9fac-4958-8a18-368976b29add", 00:08:44.740 "is_configured": true, 00:08:44.740 "data_offset": 2048, 00:08:44.740 "data_size": 63488 00:08:44.740 }, 00:08:44.740 { 00:08:44.740 "name": null, 00:08:44.740 "uuid": "7b718db6-afa9-4287-bc29-d2d6079a35b5", 00:08:44.740 "is_configured": false, 00:08:44.740 "data_offset": 0, 00:08:44.740 "data_size": 63488 00:08:44.740 }, 00:08:44.740 { 00:08:44.740 "name": "BaseBdev3", 00:08:44.740 "uuid": "5f311193-0b1d-4054-919a-c9a27f52718b", 00:08:44.740 "is_configured": true, 00:08:44.740 "data_offset": 2048, 00:08:44.740 "data_size": 63488 00:08:44.740 } 00:08:44.740 ] 00:08:44.740 }' 00:08:44.740 10:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.741 10:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.307 [2024-11-15 10:53:52.086462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.307 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.308 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.308 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.308 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.566 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.566 "name": "Existed_Raid", 00:08:45.566 "uuid": "f01babce-4325-4ca2-9b3c-8db3e9904ec9", 00:08:45.566 "strip_size_kb": 64, 00:08:45.566 "state": "configuring", 00:08:45.566 "raid_level": "raid0", 00:08:45.566 "superblock": true, 00:08:45.566 "num_base_bdevs": 3, 00:08:45.566 "num_base_bdevs_discovered": 1, 00:08:45.566 "num_base_bdevs_operational": 3, 00:08:45.566 "base_bdevs_list": [ 00:08:45.566 { 00:08:45.566 "name": null, 00:08:45.566 "uuid": "72bc65bb-9fac-4958-8a18-368976b29add", 00:08:45.566 "is_configured": false, 00:08:45.566 "data_offset": 0, 00:08:45.566 "data_size": 63488 00:08:45.566 }, 00:08:45.566 { 00:08:45.566 "name": null, 00:08:45.566 "uuid": "7b718db6-afa9-4287-bc29-d2d6079a35b5", 00:08:45.566 "is_configured": false, 00:08:45.566 "data_offset": 0, 00:08:45.566 "data_size": 63488 00:08:45.566 
}, 00:08:45.566 { 00:08:45.566 "name": "BaseBdev3", 00:08:45.566 "uuid": "5f311193-0b1d-4054-919a-c9a27f52718b", 00:08:45.566 "is_configured": true, 00:08:45.566 "data_offset": 2048, 00:08:45.566 "data_size": 63488 00:08:45.566 } 00:08:45.566 ] 00:08:45.566 }' 00:08:45.566 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.566 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.824 [2024-11-15 10:53:52.711610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.824 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.083 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.083 "name": "Existed_Raid", 00:08:46.083 "uuid": "f01babce-4325-4ca2-9b3c-8db3e9904ec9", 00:08:46.083 "strip_size_kb": 64, 00:08:46.083 "state": "configuring", 00:08:46.083 "raid_level": "raid0", 00:08:46.083 "superblock": true, 00:08:46.083 "num_base_bdevs": 3, 00:08:46.083 "num_base_bdevs_discovered": 2, 00:08:46.083 
"num_base_bdevs_operational": 3, 00:08:46.083 "base_bdevs_list": [ 00:08:46.083 { 00:08:46.083 "name": null, 00:08:46.083 "uuid": "72bc65bb-9fac-4958-8a18-368976b29add", 00:08:46.083 "is_configured": false, 00:08:46.083 "data_offset": 0, 00:08:46.083 "data_size": 63488 00:08:46.083 }, 00:08:46.083 { 00:08:46.083 "name": "BaseBdev2", 00:08:46.083 "uuid": "7b718db6-afa9-4287-bc29-d2d6079a35b5", 00:08:46.083 "is_configured": true, 00:08:46.083 "data_offset": 2048, 00:08:46.083 "data_size": 63488 00:08:46.083 }, 00:08:46.083 { 00:08:46.083 "name": "BaseBdev3", 00:08:46.083 "uuid": "5f311193-0b1d-4054-919a-c9a27f52718b", 00:08:46.083 "is_configured": true, 00:08:46.083 "data_offset": 2048, 00:08:46.083 "data_size": 63488 00:08:46.083 } 00:08:46.083 ] 00:08:46.083 }' 00:08:46.083 10:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.083 10:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.342 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:46.342 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.342 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.342 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.342 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.342 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:46.342 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:46.342 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.342 10:53:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.342 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.342 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.342 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 72bc65bb-9fac-4958-8a18-368976b29add 00:08:46.342 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.342 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.600 [2024-11-15 10:53:53.286476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:46.600 [2024-11-15 10:53:53.286840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:46.600 [2024-11-15 10:53:53.286905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:46.600 [2024-11-15 10:53:53.287193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:46.600 [2024-11-15 10:53:53.287394] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:46.600 [2024-11-15 10:53:53.287438] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:46.600 NewBaseBdev 00:08:46.600 [2024-11-15 10:53:53.287628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.600 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.600 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:46.600 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:08:46.600 10:53:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:46.600 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:46.600 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:46.600 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:46.600 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:46.600 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.600 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.600 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.600 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:46.600 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.600 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.600 [ 00:08:46.600 { 00:08:46.600 "name": "NewBaseBdev", 00:08:46.600 "aliases": [ 00:08:46.600 "72bc65bb-9fac-4958-8a18-368976b29add" 00:08:46.600 ], 00:08:46.600 "product_name": "Malloc disk", 00:08:46.600 "block_size": 512, 00:08:46.600 "num_blocks": 65536, 00:08:46.600 "uuid": "72bc65bb-9fac-4958-8a18-368976b29add", 00:08:46.600 "assigned_rate_limits": { 00:08:46.600 "rw_ios_per_sec": 0, 00:08:46.600 "rw_mbytes_per_sec": 0, 00:08:46.600 "r_mbytes_per_sec": 0, 00:08:46.600 "w_mbytes_per_sec": 0 00:08:46.600 }, 00:08:46.600 "claimed": true, 00:08:46.600 "claim_type": "exclusive_write", 00:08:46.600 "zoned": false, 00:08:46.600 "supported_io_types": { 00:08:46.600 "read": true, 00:08:46.600 "write": true, 00:08:46.600 "unmap": true, 
00:08:46.600 "flush": true, 00:08:46.600 "reset": true, 00:08:46.600 "nvme_admin": false, 00:08:46.600 "nvme_io": false, 00:08:46.600 "nvme_io_md": false, 00:08:46.600 "write_zeroes": true, 00:08:46.600 "zcopy": true, 00:08:46.600 "get_zone_info": false, 00:08:46.600 "zone_management": false, 00:08:46.600 "zone_append": false, 00:08:46.600 "compare": false, 00:08:46.600 "compare_and_write": false, 00:08:46.600 "abort": true, 00:08:46.600 "seek_hole": false, 00:08:46.600 "seek_data": false, 00:08:46.600 "copy": true, 00:08:46.600 "nvme_iov_md": false 00:08:46.600 }, 00:08:46.600 "memory_domains": [ 00:08:46.600 { 00:08:46.600 "dma_device_id": "system", 00:08:46.600 "dma_device_type": 1 00:08:46.600 }, 00:08:46.600 { 00:08:46.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.600 "dma_device_type": 2 00:08:46.600 } 00:08:46.600 ], 00:08:46.600 "driver_specific": {} 00:08:46.600 } 00:08:46.600 ] 00:08:46.600 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.600 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:46.600 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:46.601 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.601 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.601 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.601 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.601 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.601 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.601 10:53:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.601 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.601 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.601 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.601 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.601 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.601 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.601 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.601 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.601 "name": "Existed_Raid", 00:08:46.601 "uuid": "f01babce-4325-4ca2-9b3c-8db3e9904ec9", 00:08:46.601 "strip_size_kb": 64, 00:08:46.601 "state": "online", 00:08:46.601 "raid_level": "raid0", 00:08:46.601 "superblock": true, 00:08:46.601 "num_base_bdevs": 3, 00:08:46.601 "num_base_bdevs_discovered": 3, 00:08:46.601 "num_base_bdevs_operational": 3, 00:08:46.601 "base_bdevs_list": [ 00:08:46.601 { 00:08:46.601 "name": "NewBaseBdev", 00:08:46.601 "uuid": "72bc65bb-9fac-4958-8a18-368976b29add", 00:08:46.601 "is_configured": true, 00:08:46.601 "data_offset": 2048, 00:08:46.601 "data_size": 63488 00:08:46.601 }, 00:08:46.601 { 00:08:46.601 "name": "BaseBdev2", 00:08:46.601 "uuid": "7b718db6-afa9-4287-bc29-d2d6079a35b5", 00:08:46.601 "is_configured": true, 00:08:46.601 "data_offset": 2048, 00:08:46.601 "data_size": 63488 00:08:46.601 }, 00:08:46.601 { 00:08:46.601 "name": "BaseBdev3", 00:08:46.601 "uuid": "5f311193-0b1d-4054-919a-c9a27f52718b", 00:08:46.601 "is_configured": 
true, 00:08:46.601 "data_offset": 2048, 00:08:46.601 "data_size": 63488 00:08:46.601 } 00:08:46.601 ] 00:08:46.601 }' 00:08:46.601 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.601 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.860 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:46.860 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:46.860 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:46.860 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.860 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:46.860 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.860 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:46.860 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.860 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.118 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.118 [2024-11-15 10:53:53.790037] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.118 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:47.119 "name": "Existed_Raid", 00:08:47.119 "aliases": [ 00:08:47.119 "f01babce-4325-4ca2-9b3c-8db3e9904ec9" 00:08:47.119 ], 00:08:47.119 "product_name": "Raid Volume", 
00:08:47.119 "block_size": 512, 00:08:47.119 "num_blocks": 190464, 00:08:47.119 "uuid": "f01babce-4325-4ca2-9b3c-8db3e9904ec9", 00:08:47.119 "assigned_rate_limits": { 00:08:47.119 "rw_ios_per_sec": 0, 00:08:47.119 "rw_mbytes_per_sec": 0, 00:08:47.119 "r_mbytes_per_sec": 0, 00:08:47.119 "w_mbytes_per_sec": 0 00:08:47.119 }, 00:08:47.119 "claimed": false, 00:08:47.119 "zoned": false, 00:08:47.119 "supported_io_types": { 00:08:47.119 "read": true, 00:08:47.119 "write": true, 00:08:47.119 "unmap": true, 00:08:47.119 "flush": true, 00:08:47.119 "reset": true, 00:08:47.119 "nvme_admin": false, 00:08:47.119 "nvme_io": false, 00:08:47.119 "nvme_io_md": false, 00:08:47.119 "write_zeroes": true, 00:08:47.119 "zcopy": false, 00:08:47.119 "get_zone_info": false, 00:08:47.119 "zone_management": false, 00:08:47.119 "zone_append": false, 00:08:47.119 "compare": false, 00:08:47.119 "compare_and_write": false, 00:08:47.119 "abort": false, 00:08:47.119 "seek_hole": false, 00:08:47.119 "seek_data": false, 00:08:47.119 "copy": false, 00:08:47.119 "nvme_iov_md": false 00:08:47.119 }, 00:08:47.119 "memory_domains": [ 00:08:47.119 { 00:08:47.119 "dma_device_id": "system", 00:08:47.119 "dma_device_type": 1 00:08:47.119 }, 00:08:47.119 { 00:08:47.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.119 "dma_device_type": 2 00:08:47.119 }, 00:08:47.119 { 00:08:47.119 "dma_device_id": "system", 00:08:47.119 "dma_device_type": 1 00:08:47.119 }, 00:08:47.119 { 00:08:47.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.119 "dma_device_type": 2 00:08:47.119 }, 00:08:47.119 { 00:08:47.119 "dma_device_id": "system", 00:08:47.119 "dma_device_type": 1 00:08:47.119 }, 00:08:47.119 { 00:08:47.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.119 "dma_device_type": 2 00:08:47.119 } 00:08:47.119 ], 00:08:47.119 "driver_specific": { 00:08:47.119 "raid": { 00:08:47.119 "uuid": "f01babce-4325-4ca2-9b3c-8db3e9904ec9", 00:08:47.119 "strip_size_kb": 64, 00:08:47.119 "state": "online", 00:08:47.119 
"raid_level": "raid0", 00:08:47.119 "superblock": true, 00:08:47.119 "num_base_bdevs": 3, 00:08:47.119 "num_base_bdevs_discovered": 3, 00:08:47.119 "num_base_bdevs_operational": 3, 00:08:47.119 "base_bdevs_list": [ 00:08:47.119 { 00:08:47.119 "name": "NewBaseBdev", 00:08:47.119 "uuid": "72bc65bb-9fac-4958-8a18-368976b29add", 00:08:47.119 "is_configured": true, 00:08:47.119 "data_offset": 2048, 00:08:47.119 "data_size": 63488 00:08:47.119 }, 00:08:47.119 { 00:08:47.119 "name": "BaseBdev2", 00:08:47.119 "uuid": "7b718db6-afa9-4287-bc29-d2d6079a35b5", 00:08:47.119 "is_configured": true, 00:08:47.119 "data_offset": 2048, 00:08:47.119 "data_size": 63488 00:08:47.119 }, 00:08:47.119 { 00:08:47.119 "name": "BaseBdev3", 00:08:47.119 "uuid": "5f311193-0b1d-4054-919a-c9a27f52718b", 00:08:47.119 "is_configured": true, 00:08:47.119 "data_offset": 2048, 00:08:47.119 "data_size": 63488 00:08:47.119 } 00:08:47.119 ] 00:08:47.119 } 00:08:47.119 } 00:08:47.119 }' 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:47.119 BaseBdev2 00:08:47.119 BaseBdev3' 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.119 10:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.119 10:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.119 10:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.119 10:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.119 10:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.119 10:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:47.119 10:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.119 10:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.119 10:53:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.119 10:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.377 10:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.377 10:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.377 10:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.377 10:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.377 10:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.377 [2024-11-15 10:53:54.077229] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.377 [2024-11-15 10:53:54.077269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.377 [2024-11-15 10:53:54.077377] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.377 [2024-11-15 10:53:54.077443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.378 [2024-11-15 10:53:54.077562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:47.378 10:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.378 10:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64585 00:08:47.378 10:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64585 ']' 00:08:47.378 10:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 64585 00:08:47.378 10:53:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:47.378 10:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:47.378 10:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64585 00:08:47.378 10:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:47.378 10:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:47.378 killing process with pid 64585 00:08:47.378 10:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64585' 00:08:47.378 10:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64585 00:08:47.378 [2024-11-15 10:53:54.130020] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.378 10:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64585 00:08:47.636 [2024-11-15 10:53:54.445231] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.013 10:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:49.013 00:08:49.013 real 0m10.909s 00:08:49.013 user 0m17.304s 00:08:49.013 sys 0m2.012s 00:08:49.013 10:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:49.013 10:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.013 ************************************ 00:08:49.013 END TEST raid_state_function_test_sb 00:08:49.013 ************************************ 00:08:49.013 10:53:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:49.013 10:53:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:49.013 10:53:55 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:08:49.013 10:53:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.013 ************************************ 00:08:49.013 START TEST raid_superblock_test 00:08:49.013 ************************************ 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:49.013 10:53:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65211 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65211 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 65211 ']' 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:49.013 10:53:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.013 [2024-11-15 10:53:55.770385] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:08:49.013 [2024-11-15 10:53:55.770588] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65211 ] 00:08:49.272 [2024-11-15 10:53:55.945521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.272 [2024-11-15 10:53:56.071280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.675 [2024-11-15 10:53:56.275661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.675 [2024-11-15 10:53:56.275793] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:49.935 
10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.935 malloc1 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.935 [2024-11-15 10:53:56.683195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:49.935 [2024-11-15 10:53:56.683366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.935 [2024-11-15 10:53:56.683413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:49.935 [2024-11-15 10:53:56.683447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.935 [2024-11-15 10:53:56.685620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.935 [2024-11-15 10:53:56.685704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:49.935 pt1 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.935 malloc2 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.935 [2024-11-15 10:53:56.739014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:49.935 [2024-11-15 10:53:56.739156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.935 [2024-11-15 10:53:56.739197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:49.935 [2024-11-15 10:53:56.739251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.935 [2024-11-15 10:53:56.741400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.935 [2024-11-15 10:53:56.741477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:49.935 
pt2 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:49.935 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.936 malloc3 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.936 [2024-11-15 10:53:56.813064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:49.936 [2024-11-15 10:53:56.813229] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.936 [2024-11-15 10:53:56.813284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:49.936 [2024-11-15 10:53:56.813335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.936 [2024-11-15 10:53:56.815543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.936 [2024-11-15 10:53:56.815620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:49.936 pt3 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.936 [2024-11-15 10:53:56.825067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:49.936 [2024-11-15 10:53:56.826906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:49.936 [2024-11-15 10:53:56.826963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:49.936 [2024-11-15 10:53:56.827112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:49.936 [2024-11-15 10:53:56.827125] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:49.936 [2024-11-15 10:53:56.827475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:49.936 [2024-11-15 10:53:56.827695] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:49.936 [2024-11-15 10:53:56.827739] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:49.936 [2024-11-15 10:53:56.827955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.936 10:53:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.936 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.193 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.193 "name": "raid_bdev1", 00:08:50.193 "uuid": "51b7f0cc-92ad-47b8-b277-6efee717dbce", 00:08:50.193 "strip_size_kb": 64, 00:08:50.193 "state": "online", 00:08:50.193 "raid_level": "raid0", 00:08:50.193 "superblock": true, 00:08:50.193 "num_base_bdevs": 3, 00:08:50.193 "num_base_bdevs_discovered": 3, 00:08:50.193 "num_base_bdevs_operational": 3, 00:08:50.193 "base_bdevs_list": [ 00:08:50.193 { 00:08:50.193 "name": "pt1", 00:08:50.193 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.193 "is_configured": true, 00:08:50.193 "data_offset": 2048, 00:08:50.193 "data_size": 63488 00:08:50.193 }, 00:08:50.193 { 00:08:50.193 "name": "pt2", 00:08:50.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.193 "is_configured": true, 00:08:50.193 "data_offset": 2048, 00:08:50.193 "data_size": 63488 00:08:50.193 }, 00:08:50.194 { 00:08:50.194 "name": "pt3", 00:08:50.194 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:50.194 "is_configured": true, 00:08:50.194 "data_offset": 2048, 00:08:50.194 "data_size": 63488 00:08:50.194 } 00:08:50.194 ] 00:08:50.194 }' 00:08:50.194 10:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.194 10:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.452 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:50.452 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:50.452 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:50.452 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:50.452 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:50.452 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:50.452 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:50.452 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.452 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.452 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.452 [2024-11-15 10:53:57.292587] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.452 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.452 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.452 "name": "raid_bdev1", 00:08:50.452 "aliases": [ 00:08:50.452 "51b7f0cc-92ad-47b8-b277-6efee717dbce" 00:08:50.452 ], 00:08:50.452 "product_name": "Raid Volume", 00:08:50.452 "block_size": 512, 00:08:50.452 "num_blocks": 190464, 00:08:50.452 "uuid": "51b7f0cc-92ad-47b8-b277-6efee717dbce", 00:08:50.452 "assigned_rate_limits": { 00:08:50.452 "rw_ios_per_sec": 0, 00:08:50.452 "rw_mbytes_per_sec": 0, 00:08:50.452 "r_mbytes_per_sec": 0, 00:08:50.452 "w_mbytes_per_sec": 0 00:08:50.452 }, 00:08:50.452 "claimed": false, 00:08:50.452 "zoned": false, 00:08:50.452 "supported_io_types": { 00:08:50.452 "read": true, 00:08:50.452 "write": true, 00:08:50.452 "unmap": true, 00:08:50.452 "flush": true, 00:08:50.452 "reset": true, 00:08:50.452 "nvme_admin": false, 00:08:50.452 "nvme_io": false, 00:08:50.452 "nvme_io_md": false, 00:08:50.452 "write_zeroes": true, 00:08:50.452 "zcopy": false, 00:08:50.452 "get_zone_info": false, 00:08:50.452 "zone_management": false, 00:08:50.452 "zone_append": false, 00:08:50.452 "compare": 
false, 00:08:50.452 "compare_and_write": false, 00:08:50.452 "abort": false, 00:08:50.452 "seek_hole": false, 00:08:50.452 "seek_data": false, 00:08:50.452 "copy": false, 00:08:50.452 "nvme_iov_md": false 00:08:50.452 }, 00:08:50.452 "memory_domains": [ 00:08:50.452 { 00:08:50.452 "dma_device_id": "system", 00:08:50.452 "dma_device_type": 1 00:08:50.452 }, 00:08:50.452 { 00:08:50.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.452 "dma_device_type": 2 00:08:50.452 }, 00:08:50.452 { 00:08:50.452 "dma_device_id": "system", 00:08:50.452 "dma_device_type": 1 00:08:50.452 }, 00:08:50.452 { 00:08:50.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.452 "dma_device_type": 2 00:08:50.452 }, 00:08:50.452 { 00:08:50.452 "dma_device_id": "system", 00:08:50.452 "dma_device_type": 1 00:08:50.452 }, 00:08:50.452 { 00:08:50.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.452 "dma_device_type": 2 00:08:50.452 } 00:08:50.452 ], 00:08:50.452 "driver_specific": { 00:08:50.452 "raid": { 00:08:50.452 "uuid": "51b7f0cc-92ad-47b8-b277-6efee717dbce", 00:08:50.452 "strip_size_kb": 64, 00:08:50.452 "state": "online", 00:08:50.452 "raid_level": "raid0", 00:08:50.452 "superblock": true, 00:08:50.452 "num_base_bdevs": 3, 00:08:50.452 "num_base_bdevs_discovered": 3, 00:08:50.452 "num_base_bdevs_operational": 3, 00:08:50.452 "base_bdevs_list": [ 00:08:50.452 { 00:08:50.452 "name": "pt1", 00:08:50.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.452 "is_configured": true, 00:08:50.452 "data_offset": 2048, 00:08:50.452 "data_size": 63488 00:08:50.452 }, 00:08:50.452 { 00:08:50.452 "name": "pt2", 00:08:50.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.452 "is_configured": true, 00:08:50.452 "data_offset": 2048, 00:08:50.452 "data_size": 63488 00:08:50.452 }, 00:08:50.452 { 00:08:50.452 "name": "pt3", 00:08:50.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:50.452 "is_configured": true, 00:08:50.452 "data_offset": 2048, 00:08:50.452 "data_size": 
63488 00:08:50.452 } 00:08:50.452 ] 00:08:50.452 } 00:08:50.452 } 00:08:50.452 }' 00:08:50.452 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:50.711 pt2 00:08:50.711 pt3' 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:50.711 [2024-11-15 10:53:57.548126] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.711 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=51b7f0cc-92ad-47b8-b277-6efee717dbce 00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 51b7f0cc-92ad-47b8-b277-6efee717dbce ']' 00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.712 [2024-11-15 10:53:57.579757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.712 [2024-11-15 10:53:57.579785] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.712 [2024-11-15 10:53:57.579866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.712 [2024-11-15 10:53:57.579965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.712 [2024-11-15 10:53:57.579975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.712 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:50.972 10:53:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.972 [2024-11-15 10:53:57.731579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:50.972 [2024-11-15 10:53:57.733562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:50.972 [2024-11-15 10:53:57.733662] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:50.972 [2024-11-15 10:53:57.733733] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:50.972 [2024-11-15 10:53:57.733823] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:50.972 [2024-11-15 10:53:57.733880] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:50.972 [2024-11-15 10:53:57.733957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.972 [2024-11-15 10:53:57.733990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:50.972 request: 00:08:50.972 { 00:08:50.972 "name": "raid_bdev1", 00:08:50.972 "raid_level": "raid0", 00:08:50.972 "base_bdevs": [ 00:08:50.972 "malloc1", 00:08:50.972 "malloc2", 00:08:50.972 "malloc3" 00:08:50.972 ], 00:08:50.972 "strip_size_kb": 64, 00:08:50.972 "superblock": false, 00:08:50.972 "method": "bdev_raid_create", 00:08:50.972 "req_id": 1 00:08:50.972 } 00:08:50.972 Got JSON-RPC error response 00:08:50.972 response: 00:08:50.972 { 00:08:50.972 "code": -17, 00:08:50.972 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:50.972 } 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.972 [2024-11-15 10:53:57.803400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:50.972 [2024-11-15 10:53:57.803511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.972 [2024-11-15 10:53:57.803548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:50.972 [2024-11-15 10:53:57.803612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.972 [2024-11-15 10:53:57.805926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.972 [2024-11-15 10:53:57.806002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:50.972 [2024-11-15 10:53:57.806115] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:50.972 [2024-11-15 10:53:57.806204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:50.972 pt1 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.972 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.972 "name": "raid_bdev1", 00:08:50.972 "uuid": "51b7f0cc-92ad-47b8-b277-6efee717dbce", 00:08:50.972 
"strip_size_kb": 64, 00:08:50.972 "state": "configuring", 00:08:50.972 "raid_level": "raid0", 00:08:50.972 "superblock": true, 00:08:50.972 "num_base_bdevs": 3, 00:08:50.972 "num_base_bdevs_discovered": 1, 00:08:50.972 "num_base_bdevs_operational": 3, 00:08:50.973 "base_bdevs_list": [ 00:08:50.973 { 00:08:50.973 "name": "pt1", 00:08:50.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.973 "is_configured": true, 00:08:50.973 "data_offset": 2048, 00:08:50.973 "data_size": 63488 00:08:50.973 }, 00:08:50.973 { 00:08:50.973 "name": null, 00:08:50.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.973 "is_configured": false, 00:08:50.973 "data_offset": 2048, 00:08:50.973 "data_size": 63488 00:08:50.973 }, 00:08:50.973 { 00:08:50.973 "name": null, 00:08:50.973 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:50.973 "is_configured": false, 00:08:50.973 "data_offset": 2048, 00:08:50.973 "data_size": 63488 00:08:50.973 } 00:08:50.973 ] 00:08:50.973 }' 00:08:50.973 10:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.973 10:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.540 [2024-11-15 10:53:58.250643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:51.540 [2024-11-15 10:53:58.250846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.540 [2024-11-15 10:53:58.250897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:51.540 [2024-11-15 10:53:58.250935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.540 [2024-11-15 10:53:58.251418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.540 [2024-11-15 10:53:58.251476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:51.540 [2024-11-15 10:53:58.251597] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:51.540 [2024-11-15 10:53:58.251647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:51.540 pt2 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.540 [2024-11-15 10:53:58.262594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.540 10:53:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.540 "name": "raid_bdev1", 00:08:51.540 "uuid": "51b7f0cc-92ad-47b8-b277-6efee717dbce", 00:08:51.540 "strip_size_kb": 64, 00:08:51.540 "state": "configuring", 00:08:51.540 "raid_level": "raid0", 00:08:51.540 "superblock": true, 00:08:51.540 "num_base_bdevs": 3, 00:08:51.540 "num_base_bdevs_discovered": 1, 00:08:51.540 "num_base_bdevs_operational": 3, 00:08:51.540 "base_bdevs_list": [ 00:08:51.540 { 00:08:51.540 "name": "pt1", 00:08:51.540 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.540 "is_configured": true, 00:08:51.540 "data_offset": 2048, 00:08:51.540 "data_size": 63488 00:08:51.540 }, 00:08:51.540 { 00:08:51.540 "name": null, 00:08:51.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.540 "is_configured": false, 00:08:51.540 "data_offset": 0, 00:08:51.540 "data_size": 63488 00:08:51.540 }, 00:08:51.540 { 00:08:51.540 "name": null, 00:08:51.540 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:51.540 
"is_configured": false, 00:08:51.540 "data_offset": 2048, 00:08:51.540 "data_size": 63488 00:08:51.540 } 00:08:51.540 ] 00:08:51.540 }' 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.540 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.799 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:51.799 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:51.799 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:51.799 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.799 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.799 [2024-11-15 10:53:58.717786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:51.799 [2024-11-15 10:53:58.717860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.799 [2024-11-15 10:53:58.717878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:51.799 [2024-11-15 10:53:58.717889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.799 [2024-11-15 10:53:58.718327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.799 [2024-11-15 10:53:58.718348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:51.799 [2024-11-15 10:53:58.718427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:51.799 [2024-11-15 10:53:58.718451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:52.058 pt2 00:08:52.058 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:52.058 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:52.058 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:52.058 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:52.058 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.058 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.058 [2024-11-15 10:53:58.729744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:52.058 [2024-11-15 10:53:58.729796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.058 [2024-11-15 10:53:58.729810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:52.058 [2024-11-15 10:53:58.729819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.058 [2024-11-15 10:53:58.730189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.058 [2024-11-15 10:53:58.730210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:52.058 [2024-11-15 10:53:58.730273] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:52.058 [2024-11-15 10:53:58.730293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:52.058 [2024-11-15 10:53:58.730430] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:52.059 [2024-11-15 10:53:58.730442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:52.059 [2024-11-15 10:53:58.730713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:52.059 [2024-11-15 10:53:58.730860] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:52.059 [2024-11-15 10:53:58.730876] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:52.059 [2024-11-15 10:53:58.731026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.059 pt3 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.059 "name": "raid_bdev1", 00:08:52.059 "uuid": "51b7f0cc-92ad-47b8-b277-6efee717dbce", 00:08:52.059 "strip_size_kb": 64, 00:08:52.059 "state": "online", 00:08:52.059 "raid_level": "raid0", 00:08:52.059 "superblock": true, 00:08:52.059 "num_base_bdevs": 3, 00:08:52.059 "num_base_bdevs_discovered": 3, 00:08:52.059 "num_base_bdevs_operational": 3, 00:08:52.059 "base_bdevs_list": [ 00:08:52.059 { 00:08:52.059 "name": "pt1", 00:08:52.059 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.059 "is_configured": true, 00:08:52.059 "data_offset": 2048, 00:08:52.059 "data_size": 63488 00:08:52.059 }, 00:08:52.059 { 00:08:52.059 "name": "pt2", 00:08:52.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.059 "is_configured": true, 00:08:52.059 "data_offset": 2048, 00:08:52.059 "data_size": 63488 00:08:52.059 }, 00:08:52.059 { 00:08:52.059 "name": "pt3", 00:08:52.059 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:52.059 "is_configured": true, 00:08:52.059 "data_offset": 2048, 00:08:52.059 "data_size": 63488 00:08:52.059 } 00:08:52.059 ] 00:08:52.059 }' 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.059 10:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.317 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:52.317 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:52.317 10:53:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:52.317 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:52.317 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:52.317 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:52.317 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:52.317 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:52.317 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.317 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.317 [2024-11-15 10:53:59.181388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.317 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.317 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:52.317 "name": "raid_bdev1", 00:08:52.317 "aliases": [ 00:08:52.317 "51b7f0cc-92ad-47b8-b277-6efee717dbce" 00:08:52.317 ], 00:08:52.317 "product_name": "Raid Volume", 00:08:52.317 "block_size": 512, 00:08:52.317 "num_blocks": 190464, 00:08:52.317 "uuid": "51b7f0cc-92ad-47b8-b277-6efee717dbce", 00:08:52.317 "assigned_rate_limits": { 00:08:52.317 "rw_ios_per_sec": 0, 00:08:52.317 "rw_mbytes_per_sec": 0, 00:08:52.317 "r_mbytes_per_sec": 0, 00:08:52.317 "w_mbytes_per_sec": 0 00:08:52.317 }, 00:08:52.317 "claimed": false, 00:08:52.317 "zoned": false, 00:08:52.317 "supported_io_types": { 00:08:52.317 "read": true, 00:08:52.317 "write": true, 00:08:52.317 "unmap": true, 00:08:52.317 "flush": true, 00:08:52.317 "reset": true, 00:08:52.317 "nvme_admin": false, 00:08:52.317 "nvme_io": false, 00:08:52.317 "nvme_io_md": false, 00:08:52.317 
"write_zeroes": true, 00:08:52.317 "zcopy": false, 00:08:52.317 "get_zone_info": false, 00:08:52.317 "zone_management": false, 00:08:52.317 "zone_append": false, 00:08:52.317 "compare": false, 00:08:52.317 "compare_and_write": false, 00:08:52.317 "abort": false, 00:08:52.317 "seek_hole": false, 00:08:52.317 "seek_data": false, 00:08:52.317 "copy": false, 00:08:52.317 "nvme_iov_md": false 00:08:52.317 }, 00:08:52.317 "memory_domains": [ 00:08:52.317 { 00:08:52.317 "dma_device_id": "system", 00:08:52.317 "dma_device_type": 1 00:08:52.317 }, 00:08:52.317 { 00:08:52.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.317 "dma_device_type": 2 00:08:52.317 }, 00:08:52.317 { 00:08:52.317 "dma_device_id": "system", 00:08:52.317 "dma_device_type": 1 00:08:52.317 }, 00:08:52.317 { 00:08:52.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.317 "dma_device_type": 2 00:08:52.317 }, 00:08:52.317 { 00:08:52.317 "dma_device_id": "system", 00:08:52.317 "dma_device_type": 1 00:08:52.317 }, 00:08:52.317 { 00:08:52.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.317 "dma_device_type": 2 00:08:52.317 } 00:08:52.317 ], 00:08:52.317 "driver_specific": { 00:08:52.317 "raid": { 00:08:52.317 "uuid": "51b7f0cc-92ad-47b8-b277-6efee717dbce", 00:08:52.317 "strip_size_kb": 64, 00:08:52.317 "state": "online", 00:08:52.317 "raid_level": "raid0", 00:08:52.317 "superblock": true, 00:08:52.317 "num_base_bdevs": 3, 00:08:52.317 "num_base_bdevs_discovered": 3, 00:08:52.317 "num_base_bdevs_operational": 3, 00:08:52.317 "base_bdevs_list": [ 00:08:52.317 { 00:08:52.317 "name": "pt1", 00:08:52.317 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.317 "is_configured": true, 00:08:52.317 "data_offset": 2048, 00:08:52.317 "data_size": 63488 00:08:52.317 }, 00:08:52.317 { 00:08:52.317 "name": "pt2", 00:08:52.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.317 "is_configured": true, 00:08:52.317 "data_offset": 2048, 00:08:52.317 "data_size": 63488 00:08:52.317 }, 00:08:52.318 
{ 00:08:52.318 "name": "pt3", 00:08:52.318 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:52.318 "is_configured": true, 00:08:52.318 "data_offset": 2048, 00:08:52.318 "data_size": 63488 00:08:52.318 } 00:08:52.318 ] 00:08:52.318 } 00:08:52.318 } 00:08:52.318 }' 00:08:52.318 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:52.576 pt2 00:08:52.576 pt3' 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:52.576 10:53:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.576 
[2024-11-15 10:53:59.393013] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 51b7f0cc-92ad-47b8-b277-6efee717dbce '!=' 51b7f0cc-92ad-47b8-b277-6efee717dbce ']' 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65211 00:08:52.576 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 65211 ']' 00:08:52.577 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 65211 00:08:52.577 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:52.577 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:52.577 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65211 00:08:52.577 killing process with pid 65211 00:08:52.577 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:52.577 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:52.577 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65211' 00:08:52.577 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 65211 00:08:52.577 [2024-11-15 10:53:59.474365] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:52.577 [2024-11-15 10:53:59.474479] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.577 [2024-11-15 10:53:59.474554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.577 [2024-11-15 10:53:59.474565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:52.577 10:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 65211 00:08:53.144 [2024-11-15 10:53:59.778021] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.079 10:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:54.079 00:08:54.079 real 0m5.268s 00:08:54.079 user 0m7.495s 00:08:54.079 sys 0m0.907s 00:08:54.079 10:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:54.079 10:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.079 ************************************ 00:08:54.079 END TEST raid_superblock_test 00:08:54.079 ************************************ 00:08:54.339 10:54:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:54.339 10:54:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:54.339 10:54:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:54.339 10:54:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.339 ************************************ 00:08:54.339 START TEST raid_read_error_test 00:08:54.339 ************************************ 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:54.339 10:54:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:54.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wt3Qz9okdP 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65464 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65464 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65464 ']' 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.339 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:54.339 [2024-11-15 10:54:01.130462] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:08:54.339 [2024-11-15 10:54:01.130596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65464 ] 00:08:54.598 [2024-11-15 10:54:01.308344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.598 [2024-11-15 10:54:01.423574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.856 [2024-11-15 10:54:01.627408] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.856 [2024-11-15 10:54:01.627473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.115 10:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:55.115 10:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:55.115 10:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:55.115 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:55.115 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.115 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.375 BaseBdev1_malloc 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.375 true 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.375 [2024-11-15 10:54:02.066606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:55.375 [2024-11-15 10:54:02.066727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.375 [2024-11-15 10:54:02.066774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:55.375 [2024-11-15 10:54:02.066811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.375 [2024-11-15 10:54:02.069244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.375 [2024-11-15 10:54:02.069338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:55.375 BaseBdev1 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.375 BaseBdev2_malloc 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.375 true 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.375 [2024-11-15 10:54:02.126980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:55.375 [2024-11-15 10:54:02.127091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.375 [2024-11-15 10:54:02.127125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:55.375 [2024-11-15 10:54:02.127157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.375 [2024-11-15 10:54:02.129180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.375 [2024-11-15 10:54:02.129254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:55.375 BaseBdev2 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.375 BaseBdev3_malloc 00:08:55.375 10:54:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.375 true 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.375 [2024-11-15 10:54:02.204631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:55.375 [2024-11-15 10:54:02.204743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.375 [2024-11-15 10:54:02.204771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:55.375 [2024-11-15 10:54:02.204784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.375 [2024-11-15 10:54:02.207166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.375 [2024-11-15 10:54:02.207211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:55.375 BaseBdev3 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.375 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.375 [2024-11-15 10:54:02.216672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.375 [2024-11-15 10:54:02.218696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.375 [2024-11-15 10:54:02.218841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:55.376 [2024-11-15 10:54:02.219126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:55.376 [2024-11-15 10:54:02.219181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:55.376 [2024-11-15 10:54:02.219515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:55.376 [2024-11-15 10:54:02.219713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:55.376 [2024-11-15 10:54:02.219758] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:55.376 [2024-11-15 10:54:02.219969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.376 10:54:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.376 "name": "raid_bdev1", 00:08:55.376 "uuid": "cf1dc162-5404-4639-9c62-377311e0b412", 00:08:55.376 "strip_size_kb": 64, 00:08:55.376 "state": "online", 00:08:55.376 "raid_level": "raid0", 00:08:55.376 "superblock": true, 00:08:55.376 "num_base_bdevs": 3, 00:08:55.376 "num_base_bdevs_discovered": 3, 00:08:55.376 "num_base_bdevs_operational": 3, 00:08:55.376 "base_bdevs_list": [ 00:08:55.376 { 00:08:55.376 "name": "BaseBdev1", 00:08:55.376 "uuid": "4209df33-d455-59a4-b379-8664b1e9121a", 00:08:55.376 "is_configured": true, 00:08:55.376 "data_offset": 2048, 00:08:55.376 "data_size": 63488 00:08:55.376 }, 00:08:55.376 { 00:08:55.376 "name": "BaseBdev2", 00:08:55.376 "uuid": "89609a86-c8d6-51d4-ab22-1ac50338d183", 00:08:55.376 "is_configured": true, 00:08:55.376 "data_offset": 2048, 00:08:55.376 "data_size": 63488 
00:08:55.376 }, 00:08:55.376 { 00:08:55.376 "name": "BaseBdev3", 00:08:55.376 "uuid": "9e0d3d42-d967-59a9-abc6-8bb6126131ff", 00:08:55.376 "is_configured": true, 00:08:55.376 "data_offset": 2048, 00:08:55.376 "data_size": 63488 00:08:55.376 } 00:08:55.376 ] 00:08:55.376 }' 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.376 10:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.942 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:55.942 10:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:55.942 [2024-11-15 10:54:02.761239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.876 "name": "raid_bdev1", 00:08:56.876 "uuid": "cf1dc162-5404-4639-9c62-377311e0b412", 00:08:56.876 "strip_size_kb": 64, 00:08:56.876 "state": "online", 00:08:56.876 "raid_level": "raid0", 00:08:56.876 "superblock": true, 00:08:56.876 "num_base_bdevs": 3, 00:08:56.876 "num_base_bdevs_discovered": 3, 00:08:56.876 "num_base_bdevs_operational": 3, 00:08:56.876 "base_bdevs_list": [ 00:08:56.876 { 00:08:56.876 "name": "BaseBdev1", 00:08:56.876 "uuid": "4209df33-d455-59a4-b379-8664b1e9121a", 00:08:56.876 "is_configured": true, 00:08:56.876 "data_offset": 2048, 00:08:56.876 "data_size": 63488 
00:08:56.876 }, 00:08:56.876 { 00:08:56.876 "name": "BaseBdev2", 00:08:56.876 "uuid": "89609a86-c8d6-51d4-ab22-1ac50338d183", 00:08:56.876 "is_configured": true, 00:08:56.876 "data_offset": 2048, 00:08:56.876 "data_size": 63488 00:08:56.876 }, 00:08:56.876 { 00:08:56.876 "name": "BaseBdev3", 00:08:56.876 "uuid": "9e0d3d42-d967-59a9-abc6-8bb6126131ff", 00:08:56.876 "is_configured": true, 00:08:56.876 "data_offset": 2048, 00:08:56.876 "data_size": 63488 00:08:56.876 } 00:08:56.876 ] 00:08:56.876 }' 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.876 10:54:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.441 10:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:57.441 10:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.441 10:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.441 [2024-11-15 10:54:04.185550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:57.441 [2024-11-15 10:54:04.185633] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.441 [2024-11-15 10:54:04.188773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.441 [2024-11-15 10:54:04.188863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.441 [2024-11-15 10:54:04.188910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.441 [2024-11-15 10:54:04.188920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:57.441 { 00:08:57.441 "results": [ 00:08:57.441 { 00:08:57.441 "job": "raid_bdev1", 00:08:57.441 "core_mask": "0x1", 00:08:57.441 "workload": "randrw", 00:08:57.441 "percentage": 50, 
00:08:57.441 "status": "finished", 00:08:57.441 "queue_depth": 1, 00:08:57.441 "io_size": 131072, 00:08:57.441 "runtime": 1.425399, 00:08:57.441 "iops": 14934.76563404352, 00:08:57.441 "mibps": 1866.84570425544, 00:08:57.441 "io_failed": 1, 00:08:57.441 "io_timeout": 0, 00:08:57.441 "avg_latency_us": 92.92112354392586, 00:08:57.441 "min_latency_us": 26.717903930131005, 00:08:57.441 "max_latency_us": 1459.5353711790392 00:08:57.441 } 00:08:57.441 ], 00:08:57.441 "core_count": 1 00:08:57.441 } 00:08:57.441 10:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.441 10:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65464 00:08:57.441 10:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65464 ']' 00:08:57.441 10:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65464 00:08:57.441 10:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:57.441 10:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:57.442 10:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65464 00:08:57.442 10:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:57.442 10:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:57.442 killing process with pid 65464 00:08:57.442 10:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65464' 00:08:57.442 10:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65464 00:08:57.442 10:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65464 00:08:57.442 [2024-11-15 10:54:04.239235] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:57.699 [2024-11-15 
10:54:04.497458] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:59.126 10:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wt3Qz9okdP 00:08:59.126 10:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:59.126 10:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:59.126 10:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:08:59.126 10:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:59.126 10:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:59.126 10:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:59.126 ************************************ 00:08:59.126 END TEST raid_read_error_test 00:08:59.126 ************************************ 00:08:59.126 10:54:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:08:59.126 00:08:59.126 real 0m4.755s 00:08:59.126 user 0m5.678s 00:08:59.126 sys 0m0.595s 00:08:59.126 10:54:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:59.126 10:54:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.126 10:54:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:59.126 10:54:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:59.126 10:54:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:59.126 10:54:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:59.126 ************************************ 00:08:59.126 START TEST raid_write_error_test 00:08:59.126 ************************************ 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:08:59.126 10:54:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:59.126 10:54:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QIqlQ64VD1 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65610 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65610 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65610 ']' 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:59.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:59.126 10:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.126 [2024-11-15 10:54:05.955242] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:08:59.126 [2024-11-15 10:54:05.955377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65610 ] 00:08:59.384 [2024-11-15 10:54:06.130014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.384 [2024-11-15 10:54:06.249778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.642 [2024-11-15 10:54:06.462214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.642 [2024-11-15 10:54:06.462252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.207 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:00.207 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:00.207 10:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.207 10:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:00.207 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.207 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.207 BaseBdev1_malloc 00:09:00.207 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.207 10:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:00.207 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.207 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.207 true 00:09:00.207 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.207 10:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:00.207 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.207 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.207 [2024-11-15 10:54:06.920727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:00.207 [2024-11-15 10:54:06.920785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.207 [2024-11-15 10:54:06.920804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:00.208 [2024-11-15 10:54:06.920815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.208 [2024-11-15 10:54:06.922991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.208 [2024-11-15 10:54:06.923033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:00.208 BaseBdev1 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.208 BaseBdev2_malloc 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.208 true 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.208 [2024-11-15 10:54:06.990379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:00.208 [2024-11-15 10:54:06.990439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.208 [2024-11-15 10:54:06.990456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:00.208 [2024-11-15 10:54:06.990466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.208 [2024-11-15 10:54:06.992806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.208 [2024-11-15 10:54:06.992856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:00.208 BaseBdev2 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.208 10:54:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.208 10:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.208 BaseBdev3_malloc 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.208 true 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.208 [2024-11-15 10:54:07.070811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:00.208 [2024-11-15 10:54:07.070868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.208 [2024-11-15 10:54:07.070886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:00.208 [2024-11-15 10:54:07.070898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.208 [2024-11-15 10:54:07.073134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.208 [2024-11-15 10:54:07.073177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:00.208 BaseBdev3 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.208 [2024-11-15 10:54:07.082859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.208 [2024-11-15 10:54:07.084818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.208 [2024-11-15 10:54:07.084905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.208 [2024-11-15 10:54:07.085107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:00.208 [2024-11-15 10:54:07.085155] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:00.208 [2024-11-15 10:54:07.085449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:00.208 [2024-11-15 10:54:07.085635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:00.208 [2024-11-15 10:54:07.085658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:00.208 [2024-11-15 10:54:07.085826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.208 10:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.466 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.466 "name": "raid_bdev1", 00:09:00.466 "uuid": "d3c59fe1-a766-4d17-ab96-dd89db2d234b", 00:09:00.466 "strip_size_kb": 64, 00:09:00.466 "state": "online", 00:09:00.466 "raid_level": "raid0", 00:09:00.466 "superblock": true, 00:09:00.466 "num_base_bdevs": 3, 00:09:00.466 "num_base_bdevs_discovered": 3, 00:09:00.466 "num_base_bdevs_operational": 3, 00:09:00.466 "base_bdevs_list": [ 00:09:00.466 { 00:09:00.466 "name": "BaseBdev1", 
00:09:00.466 "uuid": "c4284496-2d5d-53d8-b05e-6d94ca1c0450", 00:09:00.466 "is_configured": true, 00:09:00.466 "data_offset": 2048, 00:09:00.466 "data_size": 63488 00:09:00.466 }, 00:09:00.466 { 00:09:00.466 "name": "BaseBdev2", 00:09:00.466 "uuid": "166a0a5d-4606-51b1-9191-3a45a08c8e38", 00:09:00.466 "is_configured": true, 00:09:00.466 "data_offset": 2048, 00:09:00.466 "data_size": 63488 00:09:00.466 }, 00:09:00.466 { 00:09:00.466 "name": "BaseBdev3", 00:09:00.466 "uuid": "cb089e06-4c41-574f-85e6-db7f641c5021", 00:09:00.466 "is_configured": true, 00:09:00.466 "data_offset": 2048, 00:09:00.466 "data_size": 63488 00:09:00.466 } 00:09:00.466 ] 00:09:00.466 }' 00:09:00.466 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.466 10:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.724 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:00.724 10:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:00.982 [2024-11-15 10:54:07.671413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.914 "name": "raid_bdev1", 00:09:01.914 "uuid": "d3c59fe1-a766-4d17-ab96-dd89db2d234b", 00:09:01.914 "strip_size_kb": 64, 00:09:01.914 "state": "online", 00:09:01.914 
"raid_level": "raid0", 00:09:01.914 "superblock": true, 00:09:01.914 "num_base_bdevs": 3, 00:09:01.914 "num_base_bdevs_discovered": 3, 00:09:01.914 "num_base_bdevs_operational": 3, 00:09:01.914 "base_bdevs_list": [ 00:09:01.914 { 00:09:01.914 "name": "BaseBdev1", 00:09:01.914 "uuid": "c4284496-2d5d-53d8-b05e-6d94ca1c0450", 00:09:01.914 "is_configured": true, 00:09:01.914 "data_offset": 2048, 00:09:01.914 "data_size": 63488 00:09:01.914 }, 00:09:01.914 { 00:09:01.914 "name": "BaseBdev2", 00:09:01.914 "uuid": "166a0a5d-4606-51b1-9191-3a45a08c8e38", 00:09:01.914 "is_configured": true, 00:09:01.914 "data_offset": 2048, 00:09:01.914 "data_size": 63488 00:09:01.914 }, 00:09:01.914 { 00:09:01.914 "name": "BaseBdev3", 00:09:01.914 "uuid": "cb089e06-4c41-574f-85e6-db7f641c5021", 00:09:01.914 "is_configured": true, 00:09:01.914 "data_offset": 2048, 00:09:01.914 "data_size": 63488 00:09:01.914 } 00:09:01.914 ] 00:09:01.914 }' 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.914 10:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.172 10:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:02.172 10:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.172 10:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.172 [2024-11-15 10:54:09.056039] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:02.172 [2024-11-15 10:54:09.056075] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.172 [2024-11-15 10:54:09.059044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.172 [2024-11-15 10:54:09.059094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.172 [2024-11-15 10:54:09.059135] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.172 [2024-11-15 10:54:09.059151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:02.172 { 00:09:02.172 "results": [ 00:09:02.172 { 00:09:02.172 "job": "raid_bdev1", 00:09:02.172 "core_mask": "0x1", 00:09:02.172 "workload": "randrw", 00:09:02.172 "percentage": 50, 00:09:02.172 "status": "finished", 00:09:02.172 "queue_depth": 1, 00:09:02.172 "io_size": 131072, 00:09:02.172 "runtime": 1.385203, 00:09:02.172 "iops": 15072.159098702501, 00:09:02.172 "mibps": 1884.0198873378126, 00:09:02.172 "io_failed": 1, 00:09:02.172 "io_timeout": 0, 00:09:02.172 "avg_latency_us": 92.0825964368201, 00:09:02.172 "min_latency_us": 19.786899563318777, 00:09:02.172 "max_latency_us": 1595.4724890829693 00:09:02.172 } 00:09:02.172 ], 00:09:02.172 "core_count": 1 00:09:02.172 } 00:09:02.172 10:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.172 10:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65610 00:09:02.172 10:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65610 ']' 00:09:02.172 10:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65610 00:09:02.172 10:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:02.172 10:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:02.172 10:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65610 00:09:02.430 10:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:02.430 10:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:02.430 10:54:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 65610' 00:09:02.430 killing process with pid 65610 00:09:02.430 10:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65610 00:09:02.430 [2024-11-15 10:54:09.107323] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.430 10:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65610 00:09:02.430 [2024-11-15 10:54:09.346294] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.803 10:54:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QIqlQ64VD1 00:09:03.803 10:54:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:03.803 10:54:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:03.803 10:54:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:03.803 10:54:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:03.803 10:54:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:03.803 10:54:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:03.803 10:54:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:03.803 00:09:03.803 real 0m4.712s 00:09:03.803 user 0m5.669s 00:09:03.803 sys 0m0.614s 00:09:03.803 10:54:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.803 10:54:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.803 ************************************ 00:09:03.803 END TEST raid_write_error_test 00:09:03.803 ************************************ 00:09:03.803 10:54:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:03.803 10:54:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:03.803 10:54:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:03.803 10:54:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.803 10:54:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.803 ************************************ 00:09:03.803 START TEST raid_state_function_test 00:09:03.803 ************************************ 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:03.803 10:54:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:03.803 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:03.804 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:03.804 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:03.804 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:03.804 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:03.804 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:03.804 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:03.804 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:03.804 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65754 00:09:03.804 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:03.804 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65754' 00:09:03.804 Process raid pid: 65754 00:09:03.804 10:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65754 00:09:03.804 10:54:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 65754 ']' 00:09:03.804 10:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.804 10:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:03.804 10:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.804 10:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:03.804 10:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.062 [2024-11-15 10:54:10.754565] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:09:04.062 [2024-11-15 10:54:10.754746] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.063 [2024-11-15 10:54:10.954252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.321 [2024-11-15 10:54:11.081592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.579 [2024-11-15 10:54:11.295456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.579 [2024-11-15 10:54:11.295507] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.836 [2024-11-15 10:54:11.697450] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:04.836 [2024-11-15 10:54:11.697509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:04.836 [2024-11-15 10:54:11.697520] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:04.836 [2024-11-15 10:54:11.697530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:04.836 [2024-11-15 10:54:11.697536] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:04.836 [2024-11-15 10:54:11.697545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:04.836 "name": "Existed_Raid",
00:09:04.836 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:04.836 "strip_size_kb": 64,
00:09:04.836 "state": "configuring",
00:09:04.836 "raid_level": "concat",
00:09:04.836 "superblock": false,
00:09:04.836 "num_base_bdevs": 3,
00:09:04.836 "num_base_bdevs_discovered": 0,
00:09:04.836 "num_base_bdevs_operational": 3,
00:09:04.836 "base_bdevs_list": [
00:09:04.836 {
00:09:04.836 "name": "BaseBdev1",
00:09:04.836 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:04.836 "is_configured": false,
00:09:04.836 "data_offset": 0,
00:09:04.836 "data_size": 0
00:09:04.836 },
00:09:04.836 {
00:09:04.836 "name": "BaseBdev2",
00:09:04.836 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:04.836 "is_configured": false,
00:09:04.836 "data_offset": 0,
00:09:04.836 "data_size": 0
00:09:04.836 },
00:09:04.836 {
00:09:04.836 "name": "BaseBdev3",
00:09:04.836 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:04.836 "is_configured": false,
00:09:04.836 "data_offset": 0,
00:09:04.836 "data_size": 0
00:09:04.836 }
00:09:04.836 ]
00:09:04.836 }'
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:04.836 10:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.402 [2024-11-15 10:54:12.172627] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:05.402 [2024-11-15 10:54:12.172675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.402 [2024-11-15 10:54:12.184590] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:05.402 [2024-11-15 10:54:12.184643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:05.402 [2024-11-15 10:54:12.184654] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:05.402 [2024-11-15 10:54:12.184665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:05.402 [2024-11-15 10:54:12.184672] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:05.402 [2024-11-15 10:54:12.184681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.402 [2024-11-15 10:54:12.232562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:05.402 BaseBdev1
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.402 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.403 [
00:09:05.403 {
00:09:05.403 "name": "BaseBdev1",
00:09:05.403 "aliases": [
00:09:05.403 "5e02278e-940f-4658-8eda-6542fdc60067"
00:09:05.403 ],
00:09:05.403 "product_name": "Malloc disk",
00:09:05.403 "block_size": 512,
00:09:05.403 "num_blocks": 65536,
00:09:05.403 "uuid": "5e02278e-940f-4658-8eda-6542fdc60067",
00:09:05.403 "assigned_rate_limits": {
00:09:05.403 "rw_ios_per_sec": 0,
00:09:05.403 "rw_mbytes_per_sec": 0,
00:09:05.403 "r_mbytes_per_sec": 0,
00:09:05.403 "w_mbytes_per_sec": 0
00:09:05.403 },
00:09:05.403 "claimed": true,
00:09:05.403 "claim_type": "exclusive_write",
00:09:05.403 "zoned": false,
00:09:05.403 "supported_io_types": {
00:09:05.403 "read": true,
00:09:05.403 "write": true,
00:09:05.403 "unmap": true,
00:09:05.403 "flush": true,
00:09:05.403 "reset": true,
00:09:05.403 "nvme_admin": false,
00:09:05.403 "nvme_io": false,
00:09:05.403 "nvme_io_md": false,
00:09:05.403 "write_zeroes": true,
00:09:05.403 "zcopy": true,
00:09:05.403 "get_zone_info": false,
00:09:05.403 "zone_management": false,
00:09:05.403 "zone_append": false,
00:09:05.403 "compare": false,
00:09:05.403 "compare_and_write": false,
00:09:05.403 "abort": true,
00:09:05.403 "seek_hole": false,
00:09:05.403 "seek_data": false,
00:09:05.403 "copy": true,
00:09:05.403 "nvme_iov_md": false
00:09:05.403 },
00:09:05.403 "memory_domains": [
00:09:05.403 {
00:09:05.403 "dma_device_id": "system",
00:09:05.403 "dma_device_type": 1
00:09:05.403 },
00:09:05.403 {
00:09:05.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:05.403 "dma_device_type": 2
00:09:05.403 }
00:09:05.403 ],
00:09:05.403 "driver_specific": {}
00:09:05.403 }
00:09:05.403 ]
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:05.403 "name": "Existed_Raid",
00:09:05.403 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.403 "strip_size_kb": 64,
00:09:05.403 "state": "configuring",
00:09:05.403 "raid_level": "concat",
00:09:05.403 "superblock": false,
00:09:05.403 "num_base_bdevs": 3,
00:09:05.403 "num_base_bdevs_discovered": 1,
00:09:05.403 "num_base_bdevs_operational": 3,
00:09:05.403 "base_bdevs_list": [
00:09:05.403 {
00:09:05.403 "name": "BaseBdev1",
00:09:05.403 "uuid": "5e02278e-940f-4658-8eda-6542fdc60067",
00:09:05.403 "is_configured": true,
00:09:05.403 "data_offset": 0,
00:09:05.403 "data_size": 65536
00:09:05.403 },
00:09:05.403 {
00:09:05.403 "name": "BaseBdev2",
00:09:05.403 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.403 "is_configured": false,
00:09:05.403 "data_offset": 0,
00:09:05.403 "data_size": 0
00:09:05.403 },
00:09:05.403 {
00:09:05.403 "name": "BaseBdev3",
00:09:05.403 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.403 "is_configured": false,
00:09:05.403 "data_offset": 0,
00:09:05.403 "data_size": 0
00:09:05.403 }
00:09:05.403 ]
00:09:05.403 }'
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:05.403 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.970 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:05.970 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.970 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.970 [2024-11-15 10:54:12.683967] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:05.970 [2024-11-15 10:54:12.684031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.971 [2024-11-15 10:54:12.695984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:05.971 [2024-11-15 10:54:12.697946] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:05.971 [2024-11-15 10:54:12.697991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:05.971 [2024-11-15 10:54:12.698001] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:05.971 [2024-11-15 10:54:12.698010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:05.971 "name": "Existed_Raid",
00:09:05.971 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.971 "strip_size_kb": 64,
00:09:05.971 "state": "configuring",
00:09:05.971 "raid_level": "concat",
00:09:05.971 "superblock": false,
00:09:05.971 "num_base_bdevs": 3,
00:09:05.971 "num_base_bdevs_discovered": 1,
00:09:05.971 "num_base_bdevs_operational": 3,
00:09:05.971 "base_bdevs_list": [
00:09:05.971 {
00:09:05.971 "name": "BaseBdev1",
00:09:05.971 "uuid": "5e02278e-940f-4658-8eda-6542fdc60067",
00:09:05.971 "is_configured": true,
00:09:05.971 "data_offset": 0,
00:09:05.971 "data_size": 65536
00:09:05.971 },
00:09:05.971 {
00:09:05.971 "name": "BaseBdev2",
00:09:05.971 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.971 "is_configured": false,
00:09:05.971 "data_offset": 0,
00:09:05.971 "data_size": 0
00:09:05.971 },
00:09:05.971 {
00:09:05.971 "name": "BaseBdev3",
00:09:05.971 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.971 "is_configured": false,
00:09:05.971 "data_offset": 0,
00:09:05.971 "data_size": 0
00:09:05.971 }
00:09:05.971 ]
00:09:05.971 }'
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:05.971 10:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.228 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:06.228 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.228 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.487 [2024-11-15 10:54:13.176695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:06.487 BaseBdev2
00:09:06.487 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.487 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:06.487 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:09:06.487 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:09:06.487 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:09:06.487 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:09:06.487 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:09:06.487 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:09:06.487 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.487 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.487 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.487 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:06.487 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.487 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.487 [
00:09:06.487 {
00:09:06.487 "name": "BaseBdev2",
00:09:06.487 "aliases": [
00:09:06.487 "2c0ba3cb-6548-40f9-9e37-a7c55b68b887"
00:09:06.487 ],
00:09:06.487 "product_name": "Malloc disk",
00:09:06.487 "block_size": 512,
00:09:06.487 "num_blocks": 65536,
00:09:06.487 "uuid": "2c0ba3cb-6548-40f9-9e37-a7c55b68b887",
00:09:06.487 "assigned_rate_limits": {
00:09:06.487 "rw_ios_per_sec": 0,
00:09:06.487 "rw_mbytes_per_sec": 0,
00:09:06.487 "r_mbytes_per_sec": 0,
00:09:06.487 "w_mbytes_per_sec": 0
00:09:06.487 },
00:09:06.487 "claimed": true,
00:09:06.487 "claim_type": "exclusive_write",
00:09:06.487 "zoned": false,
00:09:06.487 "supported_io_types": {
00:09:06.487 "read": true,
00:09:06.487 "write": true,
00:09:06.487 "unmap": true,
00:09:06.487 "flush": true,
00:09:06.487 "reset": true,
00:09:06.487 "nvme_admin": false,
00:09:06.487 "nvme_io": false,
00:09:06.487 "nvme_io_md": false,
00:09:06.487 "write_zeroes": true,
00:09:06.487 "zcopy": true,
00:09:06.487 "get_zone_info": false,
00:09:06.487 "zone_management": false,
00:09:06.487 "zone_append": false,
00:09:06.487 "compare": false,
00:09:06.487 "compare_and_write": false,
00:09:06.487 "abort": true,
00:09:06.487 "seek_hole": false,
00:09:06.487 "seek_data": false,
00:09:06.487 "copy": true,
00:09:06.487 "nvme_iov_md": false
00:09:06.487 },
00:09:06.487 "memory_domains": [
00:09:06.487 {
00:09:06.487 "dma_device_id": "system",
00:09:06.487 "dma_device_type": 1
00:09:06.487 },
00:09:06.487 {
00:09:06.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:06.487 "dma_device_type": 2
00:09:06.487 }
00:09:06.487 ],
00:09:06.487 "driver_specific": {}
00:09:06.487 }
00:09:06.487 ]
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:06.488 "name": "Existed_Raid",
00:09:06.488 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.488 "strip_size_kb": 64,
00:09:06.488 "state": "configuring",
00:09:06.488 "raid_level": "concat",
00:09:06.488 "superblock": false,
00:09:06.488 "num_base_bdevs": 3,
00:09:06.488 "num_base_bdevs_discovered": 2,
00:09:06.488 "num_base_bdevs_operational": 3,
00:09:06.488 "base_bdevs_list": [
00:09:06.488 {
00:09:06.488 "name": "BaseBdev1",
00:09:06.488 "uuid": "5e02278e-940f-4658-8eda-6542fdc60067",
00:09:06.488 "is_configured": true,
00:09:06.488 "data_offset": 0,
00:09:06.488 "data_size": 65536
00:09:06.488 },
00:09:06.488 {
00:09:06.488 "name": "BaseBdev2",
00:09:06.488 "uuid": "2c0ba3cb-6548-40f9-9e37-a7c55b68b887",
00:09:06.488 "is_configured": true,
00:09:06.488 "data_offset": 0,
00:09:06.488 "data_size": 65536
00:09:06.488 },
00:09:06.488 {
00:09:06.488 "name": "BaseBdev3",
00:09:06.488 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.488 "is_configured": false,
00:09:06.488 "data_offset": 0,
00:09:06.488 "data_size": 0
00:09:06.488 }
00:09:06.488 ]
00:09:06.488 }'
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:06.488 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.747 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:06.747 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.747 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.005 [2024-11-15 10:54:13.729651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:07.006 [2024-11-15 10:54:13.729703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:07.006 [2024-11-15 10:54:13.729716] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:09:07.006 [2024-11-15 10:54:13.730130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:07.006 [2024-11-15 10:54:13.730324] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:07.006 [2024-11-15 10:54:13.730340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:09:07.006 [2024-11-15 10:54:13.730597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:07.006 BaseBdev3
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.006 [
00:09:07.006 {
00:09:07.006 "name": "BaseBdev3",
00:09:07.006 "aliases": [
00:09:07.006 "cd85b5c3-4bae-4842-b152-81d6c408bc0f"
00:09:07.006 ],
00:09:07.006 "product_name": "Malloc disk",
00:09:07.006 "block_size": 512,
00:09:07.006 "num_blocks": 65536,
00:09:07.006 "uuid": "cd85b5c3-4bae-4842-b152-81d6c408bc0f",
00:09:07.006 "assigned_rate_limits": {
00:09:07.006 "rw_ios_per_sec": 0,
00:09:07.006 "rw_mbytes_per_sec": 0,
00:09:07.006 "r_mbytes_per_sec": 0,
00:09:07.006 "w_mbytes_per_sec": 0
00:09:07.006 },
00:09:07.006 "claimed": true,
00:09:07.006 "claim_type": "exclusive_write",
00:09:07.006 "zoned": false,
00:09:07.006 "supported_io_types": {
00:09:07.006 "read": true,
00:09:07.006 "write": true,
00:09:07.006 "unmap": true,
00:09:07.006 "flush": true,
00:09:07.006 "reset": true,
00:09:07.006 "nvme_admin": false,
00:09:07.006 "nvme_io": false,
00:09:07.006 "nvme_io_md": false,
00:09:07.006 "write_zeroes": true,
00:09:07.006 "zcopy": true,
00:09:07.006 "get_zone_info": false,
00:09:07.006 "zone_management": false,
00:09:07.006 "zone_append": false,
00:09:07.006 "compare": false,
00:09:07.006 "compare_and_write": false,
00:09:07.006 "abort": true,
00:09:07.006 "seek_hole": false,
00:09:07.006 "seek_data": false,
00:09:07.006 "copy": true,
00:09:07.006 "nvme_iov_md": false
00:09:07.006 },
00:09:07.006 "memory_domains": [
00:09:07.006 {
00:09:07.006 "dma_device_id": "system",
00:09:07.006 "dma_device_type": 1
00:09:07.006 },
00:09:07.006 {
00:09:07.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:07.006 "dma_device_type": 2
00:09:07.006 }
00:09:07.006 ],
00:09:07.006 "driver_specific": {}
00:09:07.006 }
00:09:07.006 ]
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:07.006 "name": "Existed_Raid",
00:09:07.006 "uuid": "7854dd8f-682c-4e6e-936e-268f8e4a37e3",
00:09:07.006 "strip_size_kb": 64,
00:09:07.006 "state": "online",
00:09:07.006 "raid_level": "concat",
00:09:07.006 "superblock": false,
00:09:07.006 "num_base_bdevs": 3,
00:09:07.006 "num_base_bdevs_discovered": 3,
00:09:07.006 "num_base_bdevs_operational": 3,
00:09:07.006 "base_bdevs_list": [
00:09:07.006 {
00:09:07.006 "name": "BaseBdev1",
00:09:07.006 "uuid": "5e02278e-940f-4658-8eda-6542fdc60067",
00:09:07.006 "is_configured": true,
00:09:07.006 "data_offset": 0,
00:09:07.006 "data_size": 65536
00:09:07.006 },
00:09:07.006 {
00:09:07.006 "name": "BaseBdev2",
00:09:07.006 "uuid": "2c0ba3cb-6548-40f9-9e37-a7c55b68b887",
00:09:07.006 "is_configured": true,
00:09:07.006 "data_offset": 0,
00:09:07.006 "data_size": 65536
00:09:07.006 },
00:09:07.006 {
00:09:07.006 "name": "BaseBdev3",
00:09:07.006 "uuid": "cd85b5c3-4bae-4842-b152-81d6c408bc0f",
00:09:07.006 "is_configured": true,
00:09:07.006 "data_offset": 0,
00:09:07.006 "data_size": 65536
00:09:07.006 }
00:09:07.006 ]
00:09:07.006 }'
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:07.006 10:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.574 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:07.574 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:07.574 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:07.574 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:07.574 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:07.574 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:07.574 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:07.574 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:07.574 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.574 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.574 [2024-11-15 10:54:14.241351] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:07.574 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.574 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:07.574 "name": "Existed_Raid",
00:09:07.574 "aliases": [
00:09:07.574 "7854dd8f-682c-4e6e-936e-268f8e4a37e3"
00:09:07.574 ],
00:09:07.574 "product_name": "Raid Volume",
00:09:07.574 "block_size": 512,
00:09:07.574 "num_blocks": 196608,
00:09:07.574 "uuid": "7854dd8f-682c-4e6e-936e-268f8e4a37e3",
00:09:07.574 "assigned_rate_limits": {
00:09:07.574 "rw_ios_per_sec": 0,
00:09:07.574 "rw_mbytes_per_sec": 0,
00:09:07.574 "r_mbytes_per_sec": 0,
00:09:07.574 "w_mbytes_per_sec": 0
00:09:07.574 },
00:09:07.574 "claimed": false,
00:09:07.574 "zoned": false,
00:09:07.574 "supported_io_types": {
00:09:07.574 "read": true,
00:09:07.574 "write": true,
00:09:07.574 "unmap": true,
00:09:07.574 "flush": true,
00:09:07.574 "reset": true,
00:09:07.574 "nvme_admin": false,
00:09:07.574 "nvme_io": false,
00:09:07.574 "nvme_io_md": false,
00:09:07.575 "write_zeroes": true,
00:09:07.575 "zcopy": false,
00:09:07.575 "get_zone_info": false,
00:09:07.575 "zone_management": false,
00:09:07.575 "zone_append": false,
00:09:07.575 "compare": false,
00:09:07.575 "compare_and_write": false,
00:09:07.575 "abort": false,
00:09:07.575 "seek_hole": false,
00:09:07.575 "seek_data": false,
00:09:07.575 "copy": false,
00:09:07.575 "nvme_iov_md": false
00:09:07.575 },
00:09:07.575 "memory_domains": [
00:09:07.575 {
00:09:07.575 "dma_device_id": "system",
00:09:07.575 "dma_device_type": 1
00:09:07.575 },
00:09:07.575 {
00:09:07.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:07.575 "dma_device_type": 2
00:09:07.575 },
00:09:07.575 {
00:09:07.575 "dma_device_id": "system",
00:09:07.575 "dma_device_type": 1
00:09:07.575 },
00:09:07.575 {
00:09:07.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:07.575 "dma_device_type": 2
00:09:07.575 },
00:09:07.575 {
00:09:07.575 "dma_device_id": "system",
00:09:07.575 "dma_device_type": 1
00:09:07.575 },
00:09:07.575 {
00:09:07.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:07.575 "dma_device_type": 2
00:09:07.575 }
00:09:07.575 ],
00:09:07.575 "driver_specific": {
00:09:07.575 "raid": {
00:09:07.575 "uuid": "7854dd8f-682c-4e6e-936e-268f8e4a37e3",
00:09:07.575 "strip_size_kb": 64,
00:09:07.575 "state": "online",
00:09:07.575 "raid_level": "concat",
00:09:07.575 "superblock": false,
00:09:07.575 "num_base_bdevs": 3,
00:09:07.575 "num_base_bdevs_discovered": 3,
00:09:07.575 "num_base_bdevs_operational": 3,
00:09:07.575 "base_bdevs_list": [
00:09:07.575 {
00:09:07.575 "name": "BaseBdev1",
00:09:07.575 "uuid": "5e02278e-940f-4658-8eda-6542fdc60067", 00:09:07.575 "is_configured": true, 00:09:07.575 "data_offset": 0, 00:09:07.575 "data_size": 65536 00:09:07.575 }, 00:09:07.575 { 00:09:07.575 "name": "BaseBdev2", 00:09:07.575 "uuid": "2c0ba3cb-6548-40f9-9e37-a7c55b68b887", 00:09:07.575 "is_configured": true, 00:09:07.575 "data_offset": 0, 00:09:07.575 "data_size": 65536 00:09:07.575 }, 00:09:07.575 { 00:09:07.575 "name": "BaseBdev3", 00:09:07.575 "uuid": "cd85b5c3-4bae-4842-b152-81d6c408bc0f", 00:09:07.575 "is_configured": true, 00:09:07.575 "data_offset": 0, 00:09:07.575 "data_size": 65536 00:09:07.575 } 00:09:07.575 ] 00:09:07.575 } 00:09:07.575 } 00:09:07.575 }' 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:07.575 BaseBdev2 00:09:07.575 BaseBdev3' 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.575 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.834 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:07.834 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.834 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.835 [2024-11-15 10:54:14.516520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:07.835 [2024-11-15 10:54:14.516554] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.835 [2024-11-15 10:54:14.516615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.835 "name": "Existed_Raid", 00:09:07.835 "uuid": "7854dd8f-682c-4e6e-936e-268f8e4a37e3", 00:09:07.835 "strip_size_kb": 64, 00:09:07.835 "state": "offline", 00:09:07.835 "raid_level": "concat", 00:09:07.835 "superblock": false, 00:09:07.835 "num_base_bdevs": 3, 00:09:07.835 "num_base_bdevs_discovered": 2, 00:09:07.835 "num_base_bdevs_operational": 2, 00:09:07.835 "base_bdevs_list": [ 00:09:07.835 { 00:09:07.835 "name": null, 00:09:07.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.835 "is_configured": false, 00:09:07.835 "data_offset": 0, 00:09:07.835 "data_size": 65536 00:09:07.835 }, 00:09:07.835 { 00:09:07.835 "name": "BaseBdev2", 00:09:07.835 "uuid": 
"2c0ba3cb-6548-40f9-9e37-a7c55b68b887", 00:09:07.835 "is_configured": true, 00:09:07.835 "data_offset": 0, 00:09:07.835 "data_size": 65536 00:09:07.835 }, 00:09:07.835 { 00:09:07.835 "name": "BaseBdev3", 00:09:07.835 "uuid": "cd85b5c3-4bae-4842-b152-81d6c408bc0f", 00:09:07.835 "is_configured": true, 00:09:07.835 "data_offset": 0, 00:09:07.835 "data_size": 65536 00:09:07.835 } 00:09:07.835 ] 00:09:07.835 }' 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.835 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.093 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:08.093 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.093 10:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.093 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.093 10:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.093 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.093 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.351 [2024-11-15 10:54:15.050688] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.351 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.351 [2024-11-15 10:54:15.210843] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:08.351 [2024-11-15 10:54:15.210902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.610 10:54:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.610 BaseBdev2 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:08.610 
10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.610 [ 00:09:08.610 { 00:09:08.610 "name": "BaseBdev2", 00:09:08.610 "aliases": [ 00:09:08.610 "445ccd17-0c83-41ff-a4a0-8d010669353a" 00:09:08.610 ], 00:09:08.610 "product_name": "Malloc disk", 00:09:08.610 "block_size": 512, 00:09:08.610 "num_blocks": 65536, 00:09:08.610 "uuid": "445ccd17-0c83-41ff-a4a0-8d010669353a", 00:09:08.610 "assigned_rate_limits": { 00:09:08.610 "rw_ios_per_sec": 0, 00:09:08.610 "rw_mbytes_per_sec": 0, 00:09:08.610 "r_mbytes_per_sec": 0, 00:09:08.610 "w_mbytes_per_sec": 0 00:09:08.610 }, 00:09:08.610 "claimed": false, 00:09:08.610 "zoned": false, 00:09:08.610 "supported_io_types": { 00:09:08.610 "read": true, 00:09:08.610 "write": true, 00:09:08.610 "unmap": true, 00:09:08.610 "flush": true, 00:09:08.610 "reset": true, 00:09:08.610 "nvme_admin": false, 00:09:08.610 "nvme_io": false, 00:09:08.610 "nvme_io_md": false, 00:09:08.610 "write_zeroes": true, 
00:09:08.610 "zcopy": true, 00:09:08.610 "get_zone_info": false, 00:09:08.610 "zone_management": false, 00:09:08.610 "zone_append": false, 00:09:08.610 "compare": false, 00:09:08.610 "compare_and_write": false, 00:09:08.610 "abort": true, 00:09:08.610 "seek_hole": false, 00:09:08.610 "seek_data": false, 00:09:08.610 "copy": true, 00:09:08.610 "nvme_iov_md": false 00:09:08.610 }, 00:09:08.610 "memory_domains": [ 00:09:08.610 { 00:09:08.610 "dma_device_id": "system", 00:09:08.610 "dma_device_type": 1 00:09:08.610 }, 00:09:08.610 { 00:09:08.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.610 "dma_device_type": 2 00:09:08.610 } 00:09:08.610 ], 00:09:08.610 "driver_specific": {} 00:09:08.610 } 00:09:08.610 ] 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.610 BaseBdev3 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:08.610 10:54:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.610 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.610 [ 00:09:08.610 { 00:09:08.610 "name": "BaseBdev3", 00:09:08.610 "aliases": [ 00:09:08.610 "dc5cbc1e-92df-461d-92fa-17eff0a93f86" 00:09:08.610 ], 00:09:08.610 "product_name": "Malloc disk", 00:09:08.610 "block_size": 512, 00:09:08.610 "num_blocks": 65536, 00:09:08.610 "uuid": "dc5cbc1e-92df-461d-92fa-17eff0a93f86", 00:09:08.610 "assigned_rate_limits": { 00:09:08.610 "rw_ios_per_sec": 0, 00:09:08.610 "rw_mbytes_per_sec": 0, 00:09:08.610 "r_mbytes_per_sec": 0, 00:09:08.610 "w_mbytes_per_sec": 0 00:09:08.610 }, 00:09:08.610 "claimed": false, 00:09:08.610 "zoned": false, 00:09:08.611 "supported_io_types": { 00:09:08.611 "read": true, 00:09:08.611 "write": true, 00:09:08.611 "unmap": true, 00:09:08.611 "flush": true, 00:09:08.611 "reset": true, 00:09:08.611 "nvme_admin": false, 00:09:08.611 "nvme_io": false, 00:09:08.611 "nvme_io_md": false, 00:09:08.611 "write_zeroes": true, 
00:09:08.611 "zcopy": true, 00:09:08.611 "get_zone_info": false, 00:09:08.611 "zone_management": false, 00:09:08.611 "zone_append": false, 00:09:08.611 "compare": false, 00:09:08.611 "compare_and_write": false, 00:09:08.611 "abort": true, 00:09:08.611 "seek_hole": false, 00:09:08.611 "seek_data": false, 00:09:08.611 "copy": true, 00:09:08.611 "nvme_iov_md": false 00:09:08.611 }, 00:09:08.611 "memory_domains": [ 00:09:08.611 { 00:09:08.611 "dma_device_id": "system", 00:09:08.611 "dma_device_type": 1 00:09:08.611 }, 00:09:08.611 { 00:09:08.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.611 "dma_device_type": 2 00:09:08.611 } 00:09:08.611 ], 00:09:08.611 "driver_specific": {} 00:09:08.611 } 00:09:08.611 ] 00:09:08.611 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.611 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:08.611 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.611 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.611 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.611 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.611 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.870 [2024-11-15 10:54:15.536105] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.870 [2024-11-15 10:54:15.536159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.870 [2024-11-15 10:54:15.536186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.870 [2024-11-15 10:54:15.538285] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.870 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.870 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:08.870 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.871 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.871 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.871 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.871 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.871 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.871 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.871 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.871 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.871 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.871 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.871 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.871 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.871 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.871 10:54:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.871 "name": "Existed_Raid", 00:09:08.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.871 "strip_size_kb": 64, 00:09:08.871 "state": "configuring", 00:09:08.871 "raid_level": "concat", 00:09:08.871 "superblock": false, 00:09:08.871 "num_base_bdevs": 3, 00:09:08.871 "num_base_bdevs_discovered": 2, 00:09:08.871 "num_base_bdevs_operational": 3, 00:09:08.871 "base_bdevs_list": [ 00:09:08.871 { 00:09:08.871 "name": "BaseBdev1", 00:09:08.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.871 "is_configured": false, 00:09:08.871 "data_offset": 0, 00:09:08.871 "data_size": 0 00:09:08.871 }, 00:09:08.871 { 00:09:08.871 "name": "BaseBdev2", 00:09:08.871 "uuid": "445ccd17-0c83-41ff-a4a0-8d010669353a", 00:09:08.871 "is_configured": true, 00:09:08.871 "data_offset": 0, 00:09:08.871 "data_size": 65536 00:09:08.871 }, 00:09:08.871 { 00:09:08.871 "name": "BaseBdev3", 00:09:08.871 "uuid": "dc5cbc1e-92df-461d-92fa-17eff0a93f86", 00:09:08.871 "is_configured": true, 00:09:08.871 "data_offset": 0, 00:09:08.871 "data_size": 65536 00:09:08.871 } 00:09:08.871 ] 00:09:08.871 }' 00:09:08.871 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.871 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.131 [2024-11-15 10:54:15.951416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.131 10:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.131 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.131 "name": "Existed_Raid", 00:09:09.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.131 "strip_size_kb": 64, 00:09:09.131 "state": "configuring", 00:09:09.131 "raid_level": "concat", 00:09:09.131 "superblock": false, 
00:09:09.131 "num_base_bdevs": 3, 00:09:09.131 "num_base_bdevs_discovered": 1, 00:09:09.131 "num_base_bdevs_operational": 3, 00:09:09.131 "base_bdevs_list": [ 00:09:09.131 { 00:09:09.131 "name": "BaseBdev1", 00:09:09.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.131 "is_configured": false, 00:09:09.131 "data_offset": 0, 00:09:09.131 "data_size": 0 00:09:09.131 }, 00:09:09.131 { 00:09:09.131 "name": null, 00:09:09.131 "uuid": "445ccd17-0c83-41ff-a4a0-8d010669353a", 00:09:09.131 "is_configured": false, 00:09:09.131 "data_offset": 0, 00:09:09.131 "data_size": 65536 00:09:09.131 }, 00:09:09.131 { 00:09:09.131 "name": "BaseBdev3", 00:09:09.131 "uuid": "dc5cbc1e-92df-461d-92fa-17eff0a93f86", 00:09:09.131 "is_configured": true, 00:09:09.131 "data_offset": 0, 00:09:09.131 "data_size": 65536 00:09:09.131 } 00:09:09.131 ] 00:09:09.131 }' 00:09:09.131 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.131 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.700 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.700 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.701 
10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.701 [2024-11-15 10:54:16.493854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.701 BaseBdev1 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.701 [ 00:09:09.701 { 00:09:09.701 "name": "BaseBdev1", 00:09:09.701 "aliases": [ 00:09:09.701 "a023a858-0de7-44f7-bf54-e314547a68a5" 00:09:09.701 ], 00:09:09.701 "product_name": 
"Malloc disk", 00:09:09.701 "block_size": 512, 00:09:09.701 "num_blocks": 65536, 00:09:09.701 "uuid": "a023a858-0de7-44f7-bf54-e314547a68a5", 00:09:09.701 "assigned_rate_limits": { 00:09:09.701 "rw_ios_per_sec": 0, 00:09:09.701 "rw_mbytes_per_sec": 0, 00:09:09.701 "r_mbytes_per_sec": 0, 00:09:09.701 "w_mbytes_per_sec": 0 00:09:09.701 }, 00:09:09.701 "claimed": true, 00:09:09.701 "claim_type": "exclusive_write", 00:09:09.701 "zoned": false, 00:09:09.701 "supported_io_types": { 00:09:09.701 "read": true, 00:09:09.701 "write": true, 00:09:09.701 "unmap": true, 00:09:09.701 "flush": true, 00:09:09.701 "reset": true, 00:09:09.701 "nvme_admin": false, 00:09:09.701 "nvme_io": false, 00:09:09.701 "nvme_io_md": false, 00:09:09.701 "write_zeroes": true, 00:09:09.701 "zcopy": true, 00:09:09.701 "get_zone_info": false, 00:09:09.701 "zone_management": false, 00:09:09.701 "zone_append": false, 00:09:09.701 "compare": false, 00:09:09.701 "compare_and_write": false, 00:09:09.701 "abort": true, 00:09:09.701 "seek_hole": false, 00:09:09.701 "seek_data": false, 00:09:09.701 "copy": true, 00:09:09.701 "nvme_iov_md": false 00:09:09.701 }, 00:09:09.701 "memory_domains": [ 00:09:09.701 { 00:09:09.701 "dma_device_id": "system", 00:09:09.701 "dma_device_type": 1 00:09:09.701 }, 00:09:09.701 { 00:09:09.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.701 "dma_device_type": 2 00:09:09.701 } 00:09:09.701 ], 00:09:09.701 "driver_specific": {} 00:09:09.701 } 00:09:09.701 ] 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.701 10:54:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.701 "name": "Existed_Raid", 00:09:09.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.701 "strip_size_kb": 64, 00:09:09.701 "state": "configuring", 00:09:09.701 "raid_level": "concat", 00:09:09.701 "superblock": false, 00:09:09.701 "num_base_bdevs": 3, 00:09:09.701 "num_base_bdevs_discovered": 2, 00:09:09.701 "num_base_bdevs_operational": 3, 00:09:09.701 "base_bdevs_list": [ 00:09:09.701 { 00:09:09.701 "name": "BaseBdev1", 
00:09:09.701 "uuid": "a023a858-0de7-44f7-bf54-e314547a68a5", 00:09:09.701 "is_configured": true, 00:09:09.701 "data_offset": 0, 00:09:09.701 "data_size": 65536 00:09:09.701 }, 00:09:09.701 { 00:09:09.701 "name": null, 00:09:09.701 "uuid": "445ccd17-0c83-41ff-a4a0-8d010669353a", 00:09:09.701 "is_configured": false, 00:09:09.701 "data_offset": 0, 00:09:09.701 "data_size": 65536 00:09:09.701 }, 00:09:09.701 { 00:09:09.701 "name": "BaseBdev3", 00:09:09.701 "uuid": "dc5cbc1e-92df-461d-92fa-17eff0a93f86", 00:09:09.701 "is_configured": true, 00:09:09.701 "data_offset": 0, 00:09:09.701 "data_size": 65536 00:09:09.701 } 00:09:09.701 ] 00:09:09.701 }' 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.701 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.269 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:10.269 10:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.269 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.269 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.269 10:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.269 [2024-11-15 10:54:17.009054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:10.269 
10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.269 "name": "Existed_Raid", 00:09:10.269 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:10.269 "strip_size_kb": 64, 00:09:10.269 "state": "configuring", 00:09:10.269 "raid_level": "concat", 00:09:10.269 "superblock": false, 00:09:10.269 "num_base_bdevs": 3, 00:09:10.269 "num_base_bdevs_discovered": 1, 00:09:10.269 "num_base_bdevs_operational": 3, 00:09:10.269 "base_bdevs_list": [ 00:09:10.269 { 00:09:10.269 "name": "BaseBdev1", 00:09:10.269 "uuid": "a023a858-0de7-44f7-bf54-e314547a68a5", 00:09:10.269 "is_configured": true, 00:09:10.269 "data_offset": 0, 00:09:10.269 "data_size": 65536 00:09:10.269 }, 00:09:10.269 { 00:09:10.269 "name": null, 00:09:10.269 "uuid": "445ccd17-0c83-41ff-a4a0-8d010669353a", 00:09:10.269 "is_configured": false, 00:09:10.269 "data_offset": 0, 00:09:10.269 "data_size": 65536 00:09:10.269 }, 00:09:10.269 { 00:09:10.269 "name": null, 00:09:10.269 "uuid": "dc5cbc1e-92df-461d-92fa-17eff0a93f86", 00:09:10.269 "is_configured": false, 00:09:10.269 "data_offset": 0, 00:09:10.269 "data_size": 65536 00:09:10.269 } 00:09:10.269 ] 00:09:10.269 }' 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.269 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.903 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:10.903 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.903 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.903 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.903 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.903 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.904 [2024-11-15 10:54:17.488305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.904 "name": "Existed_Raid", 00:09:10.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.904 "strip_size_kb": 64, 00:09:10.904 "state": "configuring", 00:09:10.904 "raid_level": "concat", 00:09:10.904 "superblock": false, 00:09:10.904 "num_base_bdevs": 3, 00:09:10.904 "num_base_bdevs_discovered": 2, 00:09:10.904 "num_base_bdevs_operational": 3, 00:09:10.904 "base_bdevs_list": [ 00:09:10.904 { 00:09:10.904 "name": "BaseBdev1", 00:09:10.904 "uuid": "a023a858-0de7-44f7-bf54-e314547a68a5", 00:09:10.904 "is_configured": true, 00:09:10.904 "data_offset": 0, 00:09:10.904 "data_size": 65536 00:09:10.904 }, 00:09:10.904 { 00:09:10.904 "name": null, 00:09:10.904 "uuid": "445ccd17-0c83-41ff-a4a0-8d010669353a", 00:09:10.904 "is_configured": false, 00:09:10.904 "data_offset": 0, 00:09:10.904 "data_size": 65536 00:09:10.904 }, 00:09:10.904 { 00:09:10.904 "name": "BaseBdev3", 00:09:10.904 "uuid": "dc5cbc1e-92df-461d-92fa-17eff0a93f86", 00:09:10.904 "is_configured": true, 00:09:10.904 "data_offset": 0, 00:09:10.904 "data_size": 65536 00:09:10.904 } 00:09:10.904 ] 00:09:10.904 }' 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.904 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.164 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.164 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.164 10:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:11.164 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.164 10:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.164 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:11.164 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.164 10:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.164 10:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.164 [2024-11-15 10:54:18.015459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.421 10:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.421 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:11.421 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.422 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.422 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.422 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.422 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.422 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.422 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.422 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.422 10:54:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.422 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.422 10:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.422 10:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.422 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.422 10:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.422 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.422 "name": "Existed_Raid", 00:09:11.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.422 "strip_size_kb": 64, 00:09:11.422 "state": "configuring", 00:09:11.422 "raid_level": "concat", 00:09:11.422 "superblock": false, 00:09:11.422 "num_base_bdevs": 3, 00:09:11.422 "num_base_bdevs_discovered": 1, 00:09:11.422 "num_base_bdevs_operational": 3, 00:09:11.422 "base_bdevs_list": [ 00:09:11.422 { 00:09:11.422 "name": null, 00:09:11.422 "uuid": "a023a858-0de7-44f7-bf54-e314547a68a5", 00:09:11.422 "is_configured": false, 00:09:11.422 "data_offset": 0, 00:09:11.422 "data_size": 65536 00:09:11.422 }, 00:09:11.422 { 00:09:11.422 "name": null, 00:09:11.422 "uuid": "445ccd17-0c83-41ff-a4a0-8d010669353a", 00:09:11.422 "is_configured": false, 00:09:11.422 "data_offset": 0, 00:09:11.422 "data_size": 65536 00:09:11.422 }, 00:09:11.422 { 00:09:11.422 "name": "BaseBdev3", 00:09:11.422 "uuid": "dc5cbc1e-92df-461d-92fa-17eff0a93f86", 00:09:11.422 "is_configured": true, 00:09:11.422 "data_offset": 0, 00:09:11.422 "data_size": 65536 00:09:11.422 } 00:09:11.422 ] 00:09:11.422 }' 00:09:11.422 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.422 10:54:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.989 [2024-11-15 10:54:18.679705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.989 10:54:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.989 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.989 "name": "Existed_Raid", 00:09:11.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.989 "strip_size_kb": 64, 00:09:11.989 "state": "configuring", 00:09:11.989 "raid_level": "concat", 00:09:11.989 "superblock": false, 00:09:11.989 "num_base_bdevs": 3, 00:09:11.989 "num_base_bdevs_discovered": 2, 00:09:11.989 "num_base_bdevs_operational": 3, 00:09:11.989 "base_bdevs_list": [ 00:09:11.989 { 00:09:11.989 "name": null, 00:09:11.989 "uuid": "a023a858-0de7-44f7-bf54-e314547a68a5", 00:09:11.989 "is_configured": false, 00:09:11.989 "data_offset": 0, 00:09:11.989 "data_size": 65536 00:09:11.989 }, 00:09:11.989 { 00:09:11.990 "name": "BaseBdev2", 00:09:11.990 "uuid": "445ccd17-0c83-41ff-a4a0-8d010669353a", 00:09:11.990 "is_configured": true, 00:09:11.990 "data_offset": 
0, 00:09:11.990 "data_size": 65536 00:09:11.990 }, 00:09:11.990 { 00:09:11.990 "name": "BaseBdev3", 00:09:11.990 "uuid": "dc5cbc1e-92df-461d-92fa-17eff0a93f86", 00:09:11.990 "is_configured": true, 00:09:11.990 "data_offset": 0, 00:09:11.990 "data_size": 65536 00:09:11.990 } 00:09:11.990 ] 00:09:11.990 }' 00:09:11.990 10:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.990 10:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.249 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.249 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.249 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:12.249 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.249 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.508 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:12.508 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:12.508 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.508 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.508 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.508 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a023a858-0de7-44f7-bf54-e314547a68a5 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.509 [2024-11-15 10:54:19.257525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:12.509 [2024-11-15 10:54:19.257577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:12.509 [2024-11-15 10:54:19.257587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:12.509 [2024-11-15 10:54:19.257846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:12.509 [2024-11-15 10:54:19.258024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:12.509 [2024-11-15 10:54:19.258041] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:12.509 [2024-11-15 10:54:19.258339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.509 NewBaseBdev 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:12.509 
10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.509 [ 00:09:12.509 { 00:09:12.509 "name": "NewBaseBdev", 00:09:12.509 "aliases": [ 00:09:12.509 "a023a858-0de7-44f7-bf54-e314547a68a5" 00:09:12.509 ], 00:09:12.509 "product_name": "Malloc disk", 00:09:12.509 "block_size": 512, 00:09:12.509 "num_blocks": 65536, 00:09:12.509 "uuid": "a023a858-0de7-44f7-bf54-e314547a68a5", 00:09:12.509 "assigned_rate_limits": { 00:09:12.509 "rw_ios_per_sec": 0, 00:09:12.509 "rw_mbytes_per_sec": 0, 00:09:12.509 "r_mbytes_per_sec": 0, 00:09:12.509 "w_mbytes_per_sec": 0 00:09:12.509 }, 00:09:12.509 "claimed": true, 00:09:12.509 "claim_type": "exclusive_write", 00:09:12.509 "zoned": false, 00:09:12.509 "supported_io_types": { 00:09:12.509 "read": true, 00:09:12.509 "write": true, 00:09:12.509 "unmap": true, 00:09:12.509 "flush": true, 00:09:12.509 "reset": true, 00:09:12.509 "nvme_admin": false, 00:09:12.509 "nvme_io": false, 00:09:12.509 "nvme_io_md": false, 00:09:12.509 "write_zeroes": true, 00:09:12.509 "zcopy": true, 00:09:12.509 "get_zone_info": false, 00:09:12.509 "zone_management": false, 00:09:12.509 "zone_append": false, 00:09:12.509 "compare": false, 00:09:12.509 "compare_and_write": false, 00:09:12.509 "abort": true, 00:09:12.509 "seek_hole": false, 00:09:12.509 "seek_data": false, 00:09:12.509 "copy": true, 00:09:12.509 "nvme_iov_md": false 00:09:12.509 }, 00:09:12.509 
"memory_domains": [ 00:09:12.509 { 00:09:12.509 "dma_device_id": "system", 00:09:12.509 "dma_device_type": 1 00:09:12.509 }, 00:09:12.509 { 00:09:12.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.509 "dma_device_type": 2 00:09:12.509 } 00:09:12.509 ], 00:09:12.509 "driver_specific": {} 00:09:12.509 } 00:09:12.509 ] 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.509 "name": "Existed_Raid", 00:09:12.509 "uuid": "691e1853-d0cd-4af6-b702-39303aa02f0b", 00:09:12.509 "strip_size_kb": 64, 00:09:12.509 "state": "online", 00:09:12.509 "raid_level": "concat", 00:09:12.509 "superblock": false, 00:09:12.509 "num_base_bdevs": 3, 00:09:12.509 "num_base_bdevs_discovered": 3, 00:09:12.509 "num_base_bdevs_operational": 3, 00:09:12.509 "base_bdevs_list": [ 00:09:12.509 { 00:09:12.509 "name": "NewBaseBdev", 00:09:12.509 "uuid": "a023a858-0de7-44f7-bf54-e314547a68a5", 00:09:12.509 "is_configured": true, 00:09:12.509 "data_offset": 0, 00:09:12.509 "data_size": 65536 00:09:12.509 }, 00:09:12.509 { 00:09:12.509 "name": "BaseBdev2", 00:09:12.509 "uuid": "445ccd17-0c83-41ff-a4a0-8d010669353a", 00:09:12.509 "is_configured": true, 00:09:12.509 "data_offset": 0, 00:09:12.509 "data_size": 65536 00:09:12.509 }, 00:09:12.509 { 00:09:12.509 "name": "BaseBdev3", 00:09:12.509 "uuid": "dc5cbc1e-92df-461d-92fa-17eff0a93f86", 00:09:12.509 "is_configured": true, 00:09:12.509 "data_offset": 0, 00:09:12.509 "data_size": 65536 00:09:12.509 } 00:09:12.509 ] 00:09:12.509 }' 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.509 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.079 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:13.079 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:13.079 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:13.079 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:13.079 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:13.079 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:13.079 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:13.079 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:13.079 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.079 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.079 [2024-11-15 10:54:19.765062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.079 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.079 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:13.079 "name": "Existed_Raid", 00:09:13.079 "aliases": [ 00:09:13.079 "691e1853-d0cd-4af6-b702-39303aa02f0b" 00:09:13.079 ], 00:09:13.079 "product_name": "Raid Volume", 00:09:13.079 "block_size": 512, 00:09:13.079 "num_blocks": 196608, 00:09:13.079 "uuid": "691e1853-d0cd-4af6-b702-39303aa02f0b", 00:09:13.079 "assigned_rate_limits": { 00:09:13.079 "rw_ios_per_sec": 0, 00:09:13.079 "rw_mbytes_per_sec": 0, 00:09:13.079 "r_mbytes_per_sec": 0, 00:09:13.079 "w_mbytes_per_sec": 0 00:09:13.079 }, 00:09:13.080 "claimed": false, 00:09:13.080 "zoned": false, 00:09:13.080 "supported_io_types": { 00:09:13.080 "read": true, 00:09:13.080 "write": true, 00:09:13.080 "unmap": true, 00:09:13.080 "flush": true, 00:09:13.080 "reset": true, 00:09:13.080 "nvme_admin": false, 00:09:13.080 "nvme_io": false, 00:09:13.080 "nvme_io_md": false, 00:09:13.080 "write_zeroes": true, 
00:09:13.080 "zcopy": false, 00:09:13.080 "get_zone_info": false, 00:09:13.080 "zone_management": false, 00:09:13.080 "zone_append": false, 00:09:13.080 "compare": false, 00:09:13.080 "compare_and_write": false, 00:09:13.080 "abort": false, 00:09:13.080 "seek_hole": false, 00:09:13.080 "seek_data": false, 00:09:13.080 "copy": false, 00:09:13.080 "nvme_iov_md": false 00:09:13.080 }, 00:09:13.080 "memory_domains": [ 00:09:13.080 { 00:09:13.080 "dma_device_id": "system", 00:09:13.080 "dma_device_type": 1 00:09:13.080 }, 00:09:13.080 { 00:09:13.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.080 "dma_device_type": 2 00:09:13.080 }, 00:09:13.080 { 00:09:13.080 "dma_device_id": "system", 00:09:13.080 "dma_device_type": 1 00:09:13.080 }, 00:09:13.080 { 00:09:13.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.080 "dma_device_type": 2 00:09:13.080 }, 00:09:13.080 { 00:09:13.080 "dma_device_id": "system", 00:09:13.080 "dma_device_type": 1 00:09:13.080 }, 00:09:13.080 { 00:09:13.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.080 "dma_device_type": 2 00:09:13.080 } 00:09:13.080 ], 00:09:13.080 "driver_specific": { 00:09:13.080 "raid": { 00:09:13.080 "uuid": "691e1853-d0cd-4af6-b702-39303aa02f0b", 00:09:13.080 "strip_size_kb": 64, 00:09:13.080 "state": "online", 00:09:13.080 "raid_level": "concat", 00:09:13.080 "superblock": false, 00:09:13.080 "num_base_bdevs": 3, 00:09:13.080 "num_base_bdevs_discovered": 3, 00:09:13.080 "num_base_bdevs_operational": 3, 00:09:13.080 "base_bdevs_list": [ 00:09:13.080 { 00:09:13.080 "name": "NewBaseBdev", 00:09:13.080 "uuid": "a023a858-0de7-44f7-bf54-e314547a68a5", 00:09:13.080 "is_configured": true, 00:09:13.080 "data_offset": 0, 00:09:13.080 "data_size": 65536 00:09:13.080 }, 00:09:13.080 { 00:09:13.080 "name": "BaseBdev2", 00:09:13.080 "uuid": "445ccd17-0c83-41ff-a4a0-8d010669353a", 00:09:13.080 "is_configured": true, 00:09:13.080 "data_offset": 0, 00:09:13.080 "data_size": 65536 00:09:13.080 }, 00:09:13.080 { 
00:09:13.080 "name": "BaseBdev3", 00:09:13.080 "uuid": "dc5cbc1e-92df-461d-92fa-17eff0a93f86", 00:09:13.080 "is_configured": true, 00:09:13.080 "data_offset": 0, 00:09:13.080 "data_size": 65536 00:09:13.080 } 00:09:13.080 ] 00:09:13.080 } 00:09:13.080 } 00:09:13.080 }' 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:13.080 BaseBdev2 00:09:13.080 BaseBdev3' 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.080 10:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.080 10:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.080 10:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.080 10:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.080 10:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.080 10:54:20 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:13.339 [2024-11-15 10:54:20.008363] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.339 [2024-11-15 10:54:20.008400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.339 [2024-11-15 10:54:20.008492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.339 [2024-11-15 10:54:20.008565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.339 [2024-11-15 10:54:20.008578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:13.339 10:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.339 10:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65754 00:09:13.339 10:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65754 ']' 00:09:13.339 10:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65754 00:09:13.339 10:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:13.339 10:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:13.339 10:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65754 00:09:13.339 10:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:13.339 10:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:13.339 killing process with pid 65754 00:09:13.339 10:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65754' 00:09:13.339 10:54:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@971 -- # kill 65754 00:09:13.339 [2024-11-15 10:54:20.060243] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.339 10:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65754 00:09:13.599 [2024-11-15 10:54:20.370831] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:14.979 00:09:14.979 real 0m10.879s 00:09:14.979 user 0m17.280s 00:09:14.979 sys 0m1.937s 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.979 ************************************ 00:09:14.979 END TEST raid_state_function_test 00:09:14.979 ************************************ 00:09:14.979 10:54:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:14.979 10:54:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:14.979 10:54:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:14.979 10:54:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:14.979 ************************************ 00:09:14.979 START TEST raid_state_function_test_sb 00:09:14.979 ************************************ 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66379 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66379' 00:09:14.979 Process raid pid: 66379 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66379 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66379 ']' 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:14.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:14.979 10:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.979 [2024-11-15 10:54:21.677069] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:09:14.979 [2024-11-15 10:54:21.677205] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.979 [2024-11-15 10:54:21.855743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.238 [2024-11-15 10:54:21.979774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.496 [2024-11-15 10:54:22.204840] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.496 [2024-11-15 10:54:22.204896] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.755 [2024-11-15 10:54:22.540539] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:15.755 [2024-11-15 10:54:22.540594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:15.755 [2024-11-15 
10:54:22.540607] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.755 [2024-11-15 10:54:22.540618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.755 [2024-11-15 10:54:22.540625] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:15.755 [2024-11-15 10:54:22.540635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.755 "name": "Existed_Raid", 00:09:15.755 "uuid": "3a11fde3-a45e-4446-8baa-f546555e5c2c", 00:09:15.755 "strip_size_kb": 64, 00:09:15.755 "state": "configuring", 00:09:15.755 "raid_level": "concat", 00:09:15.755 "superblock": true, 00:09:15.755 "num_base_bdevs": 3, 00:09:15.755 "num_base_bdevs_discovered": 0, 00:09:15.755 "num_base_bdevs_operational": 3, 00:09:15.755 "base_bdevs_list": [ 00:09:15.755 { 00:09:15.755 "name": "BaseBdev1", 00:09:15.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.755 "is_configured": false, 00:09:15.755 "data_offset": 0, 00:09:15.755 "data_size": 0 00:09:15.755 }, 00:09:15.755 { 00:09:15.755 "name": "BaseBdev2", 00:09:15.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.755 "is_configured": false, 00:09:15.755 "data_offset": 0, 00:09:15.755 "data_size": 0 00:09:15.755 }, 00:09:15.755 { 00:09:15.755 "name": "BaseBdev3", 00:09:15.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.755 "is_configured": false, 00:09:15.755 "data_offset": 0, 00:09:15.755 "data_size": 0 00:09:15.755 } 00:09:15.755 ] 00:09:15.755 }' 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.755 10:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.325 10:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.325 [2024-11-15 10:54:23.007759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.325 [2024-11-15 10:54:23.007813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.325 [2024-11-15 10:54:23.019764] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:16.325 [2024-11-15 10:54:23.019815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:16.325 [2024-11-15 10:54:23.019826] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:16.325 [2024-11-15 10:54:23.019836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:16.325 [2024-11-15 10:54:23.019843] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:16.325 [2024-11-15 10:54:23.019853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:16.325 
10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.325 [2024-11-15 10:54:23.068640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:16.325 BaseBdev1 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.325 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.325 [ 00:09:16.325 { 
00:09:16.325 "name": "BaseBdev1", 00:09:16.325 "aliases": [ 00:09:16.325 "21aeb8fa-f15c-4f1f-ac5b-1839e3e4658c" 00:09:16.325 ], 00:09:16.325 "product_name": "Malloc disk", 00:09:16.325 "block_size": 512, 00:09:16.325 "num_blocks": 65536, 00:09:16.325 "uuid": "21aeb8fa-f15c-4f1f-ac5b-1839e3e4658c", 00:09:16.325 "assigned_rate_limits": { 00:09:16.325 "rw_ios_per_sec": 0, 00:09:16.325 "rw_mbytes_per_sec": 0, 00:09:16.325 "r_mbytes_per_sec": 0, 00:09:16.325 "w_mbytes_per_sec": 0 00:09:16.325 }, 00:09:16.325 "claimed": true, 00:09:16.325 "claim_type": "exclusive_write", 00:09:16.325 "zoned": false, 00:09:16.325 "supported_io_types": { 00:09:16.325 "read": true, 00:09:16.325 "write": true, 00:09:16.325 "unmap": true, 00:09:16.325 "flush": true, 00:09:16.325 "reset": true, 00:09:16.325 "nvme_admin": false, 00:09:16.325 "nvme_io": false, 00:09:16.325 "nvme_io_md": false, 00:09:16.325 "write_zeroes": true, 00:09:16.325 "zcopy": true, 00:09:16.325 "get_zone_info": false, 00:09:16.325 "zone_management": false, 00:09:16.325 "zone_append": false, 00:09:16.325 "compare": false, 00:09:16.325 "compare_and_write": false, 00:09:16.325 "abort": true, 00:09:16.325 "seek_hole": false, 00:09:16.326 "seek_data": false, 00:09:16.326 "copy": true, 00:09:16.326 "nvme_iov_md": false 00:09:16.326 }, 00:09:16.326 "memory_domains": [ 00:09:16.326 { 00:09:16.326 "dma_device_id": "system", 00:09:16.326 "dma_device_type": 1 00:09:16.326 }, 00:09:16.326 { 00:09:16.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.326 "dma_device_type": 2 00:09:16.326 } 00:09:16.326 ], 00:09:16.326 "driver_specific": {} 00:09:16.326 } 00:09:16.326 ] 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.326 "name": "Existed_Raid", 00:09:16.326 "uuid": "97343ee8-2970-41da-99cc-9759508979a3", 00:09:16.326 "strip_size_kb": 64, 00:09:16.326 "state": "configuring", 00:09:16.326 "raid_level": "concat", 00:09:16.326 "superblock": true, 00:09:16.326 
"num_base_bdevs": 3, 00:09:16.326 "num_base_bdevs_discovered": 1, 00:09:16.326 "num_base_bdevs_operational": 3, 00:09:16.326 "base_bdevs_list": [ 00:09:16.326 { 00:09:16.326 "name": "BaseBdev1", 00:09:16.326 "uuid": "21aeb8fa-f15c-4f1f-ac5b-1839e3e4658c", 00:09:16.326 "is_configured": true, 00:09:16.326 "data_offset": 2048, 00:09:16.326 "data_size": 63488 00:09:16.326 }, 00:09:16.326 { 00:09:16.326 "name": "BaseBdev2", 00:09:16.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.326 "is_configured": false, 00:09:16.326 "data_offset": 0, 00:09:16.326 "data_size": 0 00:09:16.326 }, 00:09:16.326 { 00:09:16.326 "name": "BaseBdev3", 00:09:16.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.326 "is_configured": false, 00:09:16.326 "data_offset": 0, 00:09:16.326 "data_size": 0 00:09:16.326 } 00:09:16.326 ] 00:09:16.326 }' 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.326 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.895 [2024-11-15 10:54:23.527989] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.895 [2024-11-15 10:54:23.528053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:16.895 
10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.895 [2024-11-15 10:54:23.540026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:16.895 [2024-11-15 10:54:23.541835] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:16.895 [2024-11-15 10:54:23.541878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:16.895 [2024-11-15 10:54:23.541888] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:16.895 [2024-11-15 10:54:23.541897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.895 "name": "Existed_Raid", 00:09:16.895 "uuid": "9c06b58f-b50d-412e-8e71-7563ff0dc82f", 00:09:16.895 "strip_size_kb": 64, 00:09:16.895 "state": "configuring", 00:09:16.895 "raid_level": "concat", 00:09:16.895 "superblock": true, 00:09:16.895 "num_base_bdevs": 3, 00:09:16.895 "num_base_bdevs_discovered": 1, 00:09:16.895 "num_base_bdevs_operational": 3, 00:09:16.895 "base_bdevs_list": [ 00:09:16.895 { 00:09:16.895 "name": "BaseBdev1", 00:09:16.895 "uuid": "21aeb8fa-f15c-4f1f-ac5b-1839e3e4658c", 00:09:16.895 "is_configured": true, 00:09:16.895 "data_offset": 2048, 00:09:16.895 "data_size": 63488 00:09:16.895 }, 00:09:16.895 { 00:09:16.895 "name": "BaseBdev2", 00:09:16.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.895 "is_configured": false, 00:09:16.895 "data_offset": 0, 00:09:16.895 "data_size": 0 00:09:16.895 }, 00:09:16.895 { 00:09:16.895 "name": "BaseBdev3", 00:09:16.895 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:16.895 "is_configured": false, 00:09:16.895 "data_offset": 0, 00:09:16.895 "data_size": 0 00:09:16.895 } 00:09:16.895 ] 00:09:16.895 }' 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.895 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.155 [2024-11-15 10:54:23.979725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.155 BaseBdev2 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.155 10:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.155 [ 00:09:17.155 { 00:09:17.155 "name": "BaseBdev2", 00:09:17.155 "aliases": [ 00:09:17.155 "0d31d2cf-55e4-485a-92a6-2cb482c957a4" 00:09:17.155 ], 00:09:17.155 "product_name": "Malloc disk", 00:09:17.155 "block_size": 512, 00:09:17.155 "num_blocks": 65536, 00:09:17.155 "uuid": "0d31d2cf-55e4-485a-92a6-2cb482c957a4", 00:09:17.155 "assigned_rate_limits": { 00:09:17.155 "rw_ios_per_sec": 0, 00:09:17.155 "rw_mbytes_per_sec": 0, 00:09:17.155 "r_mbytes_per_sec": 0, 00:09:17.155 "w_mbytes_per_sec": 0 00:09:17.155 }, 00:09:17.155 "claimed": true, 00:09:17.155 "claim_type": "exclusive_write", 00:09:17.155 "zoned": false, 00:09:17.155 "supported_io_types": { 00:09:17.155 "read": true, 00:09:17.155 "write": true, 00:09:17.155 "unmap": true, 00:09:17.155 "flush": true, 00:09:17.155 "reset": true, 00:09:17.155 "nvme_admin": false, 00:09:17.155 "nvme_io": false, 00:09:17.155 "nvme_io_md": false, 00:09:17.155 "write_zeroes": true, 00:09:17.155 "zcopy": true, 00:09:17.155 "get_zone_info": false, 00:09:17.155 "zone_management": false, 00:09:17.155 "zone_append": false, 00:09:17.155 "compare": false, 00:09:17.155 "compare_and_write": false, 00:09:17.155 "abort": true, 00:09:17.155 "seek_hole": false, 00:09:17.155 "seek_data": false, 00:09:17.155 "copy": true, 00:09:17.155 "nvme_iov_md": false 00:09:17.155 }, 00:09:17.155 "memory_domains": [ 00:09:17.155 { 00:09:17.155 "dma_device_id": "system", 00:09:17.155 "dma_device_type": 1 00:09:17.155 }, 00:09:17.155 { 00:09:17.155 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.155 "dma_device_type": 2 00:09:17.155 } 00:09:17.155 ], 00:09:17.155 "driver_specific": {} 00:09:17.155 } 00:09:17.155 ] 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.155 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.415 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.415 "name": "Existed_Raid", 00:09:17.415 "uuid": "9c06b58f-b50d-412e-8e71-7563ff0dc82f", 00:09:17.415 "strip_size_kb": 64, 00:09:17.415 "state": "configuring", 00:09:17.415 "raid_level": "concat", 00:09:17.415 "superblock": true, 00:09:17.415 "num_base_bdevs": 3, 00:09:17.415 "num_base_bdevs_discovered": 2, 00:09:17.415 "num_base_bdevs_operational": 3, 00:09:17.415 "base_bdevs_list": [ 00:09:17.415 { 00:09:17.415 "name": "BaseBdev1", 00:09:17.415 "uuid": "21aeb8fa-f15c-4f1f-ac5b-1839e3e4658c", 00:09:17.415 "is_configured": true, 00:09:17.415 "data_offset": 2048, 00:09:17.415 "data_size": 63488 00:09:17.415 }, 00:09:17.415 { 00:09:17.415 "name": "BaseBdev2", 00:09:17.415 "uuid": "0d31d2cf-55e4-485a-92a6-2cb482c957a4", 00:09:17.415 "is_configured": true, 00:09:17.415 "data_offset": 2048, 00:09:17.415 "data_size": 63488 00:09:17.415 }, 00:09:17.415 { 00:09:17.415 "name": "BaseBdev3", 00:09:17.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.415 "is_configured": false, 00:09:17.415 "data_offset": 0, 00:09:17.415 "data_size": 0 00:09:17.415 } 00:09:17.415 ] 00:09:17.415 }' 00:09:17.415 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.415 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:17.675 10:54:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.675 [2024-11-15 10:54:24.494908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.675 [2024-11-15 10:54:24.495182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:17.675 [2024-11-15 10:54:24.495206] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:17.675 [2024-11-15 10:54:24.495537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:17.675 [2024-11-15 10:54:24.495715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:17.675 [2024-11-15 10:54:24.495733] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:17.675 BaseBdev3 00:09:17.675 [2024-11-15 10:54:24.495913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.675 [ 00:09:17.675 { 00:09:17.675 "name": "BaseBdev3", 00:09:17.675 "aliases": [ 00:09:17.675 "1fe691db-4ac3-4e6b-b396-6cfac60a99c8" 00:09:17.675 ], 00:09:17.675 "product_name": "Malloc disk", 00:09:17.675 "block_size": 512, 00:09:17.675 "num_blocks": 65536, 00:09:17.675 "uuid": "1fe691db-4ac3-4e6b-b396-6cfac60a99c8", 00:09:17.675 "assigned_rate_limits": { 00:09:17.675 "rw_ios_per_sec": 0, 00:09:17.675 "rw_mbytes_per_sec": 0, 00:09:17.675 "r_mbytes_per_sec": 0, 00:09:17.675 "w_mbytes_per_sec": 0 00:09:17.675 }, 00:09:17.675 "claimed": true, 00:09:17.675 "claim_type": "exclusive_write", 00:09:17.675 "zoned": false, 00:09:17.675 "supported_io_types": { 00:09:17.675 "read": true, 00:09:17.675 "write": true, 00:09:17.675 "unmap": true, 00:09:17.675 "flush": true, 00:09:17.675 "reset": true, 00:09:17.675 "nvme_admin": false, 00:09:17.675 "nvme_io": false, 00:09:17.675 "nvme_io_md": false, 00:09:17.675 "write_zeroes": true, 00:09:17.675 "zcopy": true, 00:09:17.675 "get_zone_info": false, 00:09:17.675 "zone_management": false, 00:09:17.675 "zone_append": false, 00:09:17.675 "compare": false, 00:09:17.675 "compare_and_write": false, 00:09:17.675 "abort": true, 00:09:17.675 "seek_hole": false, 00:09:17.675 "seek_data": false, 
00:09:17.675 "copy": true, 00:09:17.675 "nvme_iov_md": false 00:09:17.675 }, 00:09:17.675 "memory_domains": [ 00:09:17.675 { 00:09:17.675 "dma_device_id": "system", 00:09:17.675 "dma_device_type": 1 00:09:17.675 }, 00:09:17.675 { 00:09:17.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.675 "dma_device_type": 2 00:09:17.675 } 00:09:17.675 ], 00:09:17.675 "driver_specific": {} 00:09:17.675 } 00:09:17.675 ] 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:17.675 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.676 "name": "Existed_Raid", 00:09:17.676 "uuid": "9c06b58f-b50d-412e-8e71-7563ff0dc82f", 00:09:17.676 "strip_size_kb": 64, 00:09:17.676 "state": "online", 00:09:17.676 "raid_level": "concat", 00:09:17.676 "superblock": true, 00:09:17.676 "num_base_bdevs": 3, 00:09:17.676 "num_base_bdevs_discovered": 3, 00:09:17.676 "num_base_bdevs_operational": 3, 00:09:17.676 "base_bdevs_list": [ 00:09:17.676 { 00:09:17.676 "name": "BaseBdev1", 00:09:17.676 "uuid": "21aeb8fa-f15c-4f1f-ac5b-1839e3e4658c", 00:09:17.676 "is_configured": true, 00:09:17.676 "data_offset": 2048, 00:09:17.676 "data_size": 63488 00:09:17.676 }, 00:09:17.676 { 00:09:17.676 "name": "BaseBdev2", 00:09:17.676 "uuid": "0d31d2cf-55e4-485a-92a6-2cb482c957a4", 00:09:17.676 "is_configured": true, 00:09:17.676 "data_offset": 2048, 00:09:17.676 "data_size": 63488 00:09:17.676 }, 00:09:17.676 { 00:09:17.676 "name": "BaseBdev3", 00:09:17.676 "uuid": "1fe691db-4ac3-4e6b-b396-6cfac60a99c8", 00:09:17.676 "is_configured": true, 00:09:17.676 "data_offset": 2048, 00:09:17.676 "data_size": 63488 00:09:17.676 } 00:09:17.676 ] 00:09:17.676 }' 00:09:17.676 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.676 10:54:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.271 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:18.271 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:18.271 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:18.271 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.271 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.271 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.271 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:18.271 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.271 10:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.271 10:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.271 [2024-11-15 10:54:24.998474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.271 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.271 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.271 "name": "Existed_Raid", 00:09:18.271 "aliases": [ 00:09:18.271 "9c06b58f-b50d-412e-8e71-7563ff0dc82f" 00:09:18.271 ], 00:09:18.271 "product_name": "Raid Volume", 00:09:18.271 "block_size": 512, 00:09:18.271 "num_blocks": 190464, 00:09:18.271 "uuid": "9c06b58f-b50d-412e-8e71-7563ff0dc82f", 00:09:18.271 "assigned_rate_limits": { 00:09:18.271 "rw_ios_per_sec": 0, 00:09:18.271 "rw_mbytes_per_sec": 0, 00:09:18.271 
"r_mbytes_per_sec": 0, 00:09:18.271 "w_mbytes_per_sec": 0 00:09:18.271 }, 00:09:18.271 "claimed": false, 00:09:18.271 "zoned": false, 00:09:18.271 "supported_io_types": { 00:09:18.271 "read": true, 00:09:18.271 "write": true, 00:09:18.271 "unmap": true, 00:09:18.271 "flush": true, 00:09:18.271 "reset": true, 00:09:18.271 "nvme_admin": false, 00:09:18.271 "nvme_io": false, 00:09:18.271 "nvme_io_md": false, 00:09:18.271 "write_zeroes": true, 00:09:18.271 "zcopy": false, 00:09:18.271 "get_zone_info": false, 00:09:18.271 "zone_management": false, 00:09:18.271 "zone_append": false, 00:09:18.271 "compare": false, 00:09:18.271 "compare_and_write": false, 00:09:18.271 "abort": false, 00:09:18.271 "seek_hole": false, 00:09:18.271 "seek_data": false, 00:09:18.271 "copy": false, 00:09:18.271 "nvme_iov_md": false 00:09:18.271 }, 00:09:18.271 "memory_domains": [ 00:09:18.271 { 00:09:18.271 "dma_device_id": "system", 00:09:18.271 "dma_device_type": 1 00:09:18.271 }, 00:09:18.271 { 00:09:18.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.271 "dma_device_type": 2 00:09:18.271 }, 00:09:18.271 { 00:09:18.271 "dma_device_id": "system", 00:09:18.271 "dma_device_type": 1 00:09:18.271 }, 00:09:18.271 { 00:09:18.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.271 "dma_device_type": 2 00:09:18.271 }, 00:09:18.271 { 00:09:18.271 "dma_device_id": "system", 00:09:18.271 "dma_device_type": 1 00:09:18.271 }, 00:09:18.271 { 00:09:18.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.271 "dma_device_type": 2 00:09:18.271 } 00:09:18.271 ], 00:09:18.271 "driver_specific": { 00:09:18.271 "raid": { 00:09:18.271 "uuid": "9c06b58f-b50d-412e-8e71-7563ff0dc82f", 00:09:18.271 "strip_size_kb": 64, 00:09:18.271 "state": "online", 00:09:18.271 "raid_level": "concat", 00:09:18.271 "superblock": true, 00:09:18.271 "num_base_bdevs": 3, 00:09:18.271 "num_base_bdevs_discovered": 3, 00:09:18.271 "num_base_bdevs_operational": 3, 00:09:18.271 "base_bdevs_list": [ 00:09:18.271 { 00:09:18.271 
"name": "BaseBdev1", 00:09:18.271 "uuid": "21aeb8fa-f15c-4f1f-ac5b-1839e3e4658c", 00:09:18.271 "is_configured": true, 00:09:18.271 "data_offset": 2048, 00:09:18.271 "data_size": 63488 00:09:18.271 }, 00:09:18.271 { 00:09:18.271 "name": "BaseBdev2", 00:09:18.271 "uuid": "0d31d2cf-55e4-485a-92a6-2cb482c957a4", 00:09:18.271 "is_configured": true, 00:09:18.271 "data_offset": 2048, 00:09:18.271 "data_size": 63488 00:09:18.271 }, 00:09:18.271 { 00:09:18.271 "name": "BaseBdev3", 00:09:18.271 "uuid": "1fe691db-4ac3-4e6b-b396-6cfac60a99c8", 00:09:18.271 "is_configured": true, 00:09:18.271 "data_offset": 2048, 00:09:18.271 "data_size": 63488 00:09:18.271 } 00:09:18.271 ] 00:09:18.271 } 00:09:18.272 } 00:09:18.272 }' 00:09:18.272 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.272 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:18.272 BaseBdev2 00:09:18.272 BaseBdev3' 00:09:18.272 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.272 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:18.272 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.272 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:18.272 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.272 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.272 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.272 10:54:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.272 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.272 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.272 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.530 [2024-11-15 10:54:25.301667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:18.530 [2024-11-15 10:54:25.301760] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:18.530 [2024-11-15 10:54:25.301841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.530 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.531 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.789 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.789 "name": "Existed_Raid", 00:09:18.789 "uuid": "9c06b58f-b50d-412e-8e71-7563ff0dc82f", 00:09:18.789 "strip_size_kb": 64, 00:09:18.789 "state": "offline", 00:09:18.789 "raid_level": "concat", 00:09:18.789 "superblock": true, 00:09:18.789 "num_base_bdevs": 3, 00:09:18.789 "num_base_bdevs_discovered": 2, 00:09:18.789 "num_base_bdevs_operational": 2, 00:09:18.789 "base_bdevs_list": [ 00:09:18.789 { 00:09:18.789 "name": null, 00:09:18.789 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:18.789 "is_configured": false, 00:09:18.789 "data_offset": 0, 00:09:18.789 "data_size": 63488 00:09:18.789 }, 00:09:18.789 { 00:09:18.789 "name": "BaseBdev2", 00:09:18.789 "uuid": "0d31d2cf-55e4-485a-92a6-2cb482c957a4", 00:09:18.789 "is_configured": true, 00:09:18.789 "data_offset": 2048, 00:09:18.789 "data_size": 63488 00:09:18.789 }, 00:09:18.789 { 00:09:18.789 "name": "BaseBdev3", 00:09:18.789 "uuid": "1fe691db-4ac3-4e6b-b396-6cfac60a99c8", 00:09:18.789 "is_configured": true, 00:09:18.789 "data_offset": 2048, 00:09:18.789 "data_size": 63488 00:09:18.789 } 00:09:18.789 ] 00:09:18.789 }' 00:09:18.789 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.789 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.049 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:19.049 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:19.049 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.049 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:19.049 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.049 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.049 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.049 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:19.049 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:19.049 10:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:19.049 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.049 10:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.049 [2024-11-15 10:54:25.917282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.309 [2024-11-15 10:54:26.079934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:19.309 [2024-11-15 10:54:26.079994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.309 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.569 BaseBdev2 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.569 
10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:19.569 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.570 [ 00:09:19.570 { 00:09:19.570 "name": "BaseBdev2", 00:09:19.570 "aliases": [ 00:09:19.570 "5dfe4123-797f-471c-9756-7e2b03cf4009" 00:09:19.570 ], 00:09:19.570 "product_name": "Malloc disk", 00:09:19.570 "block_size": 512, 00:09:19.570 "num_blocks": 65536, 00:09:19.570 "uuid": "5dfe4123-797f-471c-9756-7e2b03cf4009", 00:09:19.570 "assigned_rate_limits": { 00:09:19.570 "rw_ios_per_sec": 0, 00:09:19.570 "rw_mbytes_per_sec": 0, 00:09:19.570 "r_mbytes_per_sec": 0, 00:09:19.570 "w_mbytes_per_sec": 0 
00:09:19.570 }, 00:09:19.570 "claimed": false, 00:09:19.570 "zoned": false, 00:09:19.570 "supported_io_types": { 00:09:19.570 "read": true, 00:09:19.570 "write": true, 00:09:19.570 "unmap": true, 00:09:19.570 "flush": true, 00:09:19.570 "reset": true, 00:09:19.570 "nvme_admin": false, 00:09:19.570 "nvme_io": false, 00:09:19.570 "nvme_io_md": false, 00:09:19.570 "write_zeroes": true, 00:09:19.570 "zcopy": true, 00:09:19.570 "get_zone_info": false, 00:09:19.570 "zone_management": false, 00:09:19.570 "zone_append": false, 00:09:19.570 "compare": false, 00:09:19.570 "compare_and_write": false, 00:09:19.570 "abort": true, 00:09:19.570 "seek_hole": false, 00:09:19.570 "seek_data": false, 00:09:19.570 "copy": true, 00:09:19.570 "nvme_iov_md": false 00:09:19.570 }, 00:09:19.570 "memory_domains": [ 00:09:19.570 { 00:09:19.570 "dma_device_id": "system", 00:09:19.570 "dma_device_type": 1 00:09:19.570 }, 00:09:19.570 { 00:09:19.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.570 "dma_device_type": 2 00:09:19.570 } 00:09:19.570 ], 00:09:19.570 "driver_specific": {} 00:09:19.570 } 00:09:19.570 ] 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.570 BaseBdev3 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.570 [ 00:09:19.570 { 00:09:19.570 "name": "BaseBdev3", 00:09:19.570 "aliases": [ 00:09:19.570 "8d8283c7-2ad3-4c0e-a424-2ef97254bdad" 00:09:19.570 ], 00:09:19.570 "product_name": "Malloc disk", 00:09:19.570 "block_size": 512, 00:09:19.570 "num_blocks": 65536, 00:09:19.570 "uuid": "8d8283c7-2ad3-4c0e-a424-2ef97254bdad", 00:09:19.570 "assigned_rate_limits": { 00:09:19.570 "rw_ios_per_sec": 0, 00:09:19.570 "rw_mbytes_per_sec": 0, 
00:09:19.570 "r_mbytes_per_sec": 0, 00:09:19.570 "w_mbytes_per_sec": 0 00:09:19.570 }, 00:09:19.570 "claimed": false, 00:09:19.570 "zoned": false, 00:09:19.570 "supported_io_types": { 00:09:19.570 "read": true, 00:09:19.570 "write": true, 00:09:19.570 "unmap": true, 00:09:19.570 "flush": true, 00:09:19.570 "reset": true, 00:09:19.570 "nvme_admin": false, 00:09:19.570 "nvme_io": false, 00:09:19.570 "nvme_io_md": false, 00:09:19.570 "write_zeroes": true, 00:09:19.570 "zcopy": true, 00:09:19.570 "get_zone_info": false, 00:09:19.570 "zone_management": false, 00:09:19.570 "zone_append": false, 00:09:19.570 "compare": false, 00:09:19.570 "compare_and_write": false, 00:09:19.570 "abort": true, 00:09:19.570 "seek_hole": false, 00:09:19.570 "seek_data": false, 00:09:19.570 "copy": true, 00:09:19.570 "nvme_iov_md": false 00:09:19.570 }, 00:09:19.570 "memory_domains": [ 00:09:19.570 { 00:09:19.570 "dma_device_id": "system", 00:09:19.570 "dma_device_type": 1 00:09:19.570 }, 00:09:19.570 { 00:09:19.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.570 "dma_device_type": 2 00:09:19.570 } 00:09:19.570 ], 00:09:19.570 "driver_specific": {} 00:09:19.570 } 00:09:19.570 ] 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.570 [2024-11-15 10:54:26.413529] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.570 [2024-11-15 10:54:26.413575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.570 [2024-11-15 10:54:26.413600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.570 [2024-11-15 10:54:26.415593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.570 10:54:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.570 "name": "Existed_Raid", 00:09:19.570 "uuid": "00921186-b5dd-477d-9d6e-45706bacef1d", 00:09:19.570 "strip_size_kb": 64, 00:09:19.570 "state": "configuring", 00:09:19.570 "raid_level": "concat", 00:09:19.570 "superblock": true, 00:09:19.570 "num_base_bdevs": 3, 00:09:19.570 "num_base_bdevs_discovered": 2, 00:09:19.570 "num_base_bdevs_operational": 3, 00:09:19.570 "base_bdevs_list": [ 00:09:19.570 { 00:09:19.570 "name": "BaseBdev1", 00:09:19.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.570 "is_configured": false, 00:09:19.570 "data_offset": 0, 00:09:19.570 "data_size": 0 00:09:19.570 }, 00:09:19.570 { 00:09:19.570 "name": "BaseBdev2", 00:09:19.570 "uuid": "5dfe4123-797f-471c-9756-7e2b03cf4009", 00:09:19.570 "is_configured": true, 00:09:19.570 "data_offset": 2048, 00:09:19.570 "data_size": 63488 00:09:19.570 }, 00:09:19.570 { 00:09:19.570 "name": "BaseBdev3", 00:09:19.570 "uuid": "8d8283c7-2ad3-4c0e-a424-2ef97254bdad", 00:09:19.570 "is_configured": true, 00:09:19.570 "data_offset": 2048, 00:09:19.570 "data_size": 63488 00:09:19.570 } 00:09:19.570 ] 00:09:19.570 }' 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.570 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.140 [2024-11-15 10:54:26.876736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.140 "name": "Existed_Raid", 00:09:20.140 "uuid": "00921186-b5dd-477d-9d6e-45706bacef1d", 00:09:20.140 "strip_size_kb": 64, 00:09:20.140 "state": "configuring", 00:09:20.140 "raid_level": "concat", 00:09:20.140 "superblock": true, 00:09:20.140 "num_base_bdevs": 3, 00:09:20.140 "num_base_bdevs_discovered": 1, 00:09:20.140 "num_base_bdevs_operational": 3, 00:09:20.140 "base_bdevs_list": [ 00:09:20.140 { 00:09:20.140 "name": "BaseBdev1", 00:09:20.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.140 "is_configured": false, 00:09:20.140 "data_offset": 0, 00:09:20.140 "data_size": 0 00:09:20.140 }, 00:09:20.140 { 00:09:20.140 "name": null, 00:09:20.140 "uuid": "5dfe4123-797f-471c-9756-7e2b03cf4009", 00:09:20.140 "is_configured": false, 00:09:20.140 "data_offset": 0, 00:09:20.140 "data_size": 63488 00:09:20.140 }, 00:09:20.140 { 00:09:20.140 "name": "BaseBdev3", 00:09:20.140 "uuid": "8d8283c7-2ad3-4c0e-a424-2ef97254bdad", 00:09:20.140 "is_configured": true, 00:09:20.140 "data_offset": 2048, 00:09:20.140 "data_size": 63488 00:09:20.140 } 00:09:20.140 ] 00:09:20.140 }' 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.140 10:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.399 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:20.399 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.399 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:20.399 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.399 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.658 [2024-11-15 10:54:27.370633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.658 BaseBdev1 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.658 10:54:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.658 [ 00:09:20.658 { 00:09:20.658 "name": "BaseBdev1", 00:09:20.658 "aliases": [ 00:09:20.658 "afa27e0a-e267-47e6-a3e8-9d27beb31a44" 00:09:20.658 ], 00:09:20.658 "product_name": "Malloc disk", 00:09:20.658 "block_size": 512, 00:09:20.658 "num_blocks": 65536, 00:09:20.658 "uuid": "afa27e0a-e267-47e6-a3e8-9d27beb31a44", 00:09:20.658 "assigned_rate_limits": { 00:09:20.658 "rw_ios_per_sec": 0, 00:09:20.658 "rw_mbytes_per_sec": 0, 00:09:20.658 "r_mbytes_per_sec": 0, 00:09:20.658 "w_mbytes_per_sec": 0 00:09:20.658 }, 00:09:20.658 "claimed": true, 00:09:20.658 "claim_type": "exclusive_write", 00:09:20.658 "zoned": false, 00:09:20.658 "supported_io_types": { 00:09:20.658 "read": true, 00:09:20.658 "write": true, 00:09:20.658 "unmap": true, 00:09:20.658 "flush": true, 00:09:20.658 "reset": true, 00:09:20.658 "nvme_admin": false, 00:09:20.658 "nvme_io": false, 00:09:20.658 "nvme_io_md": false, 00:09:20.658 "write_zeroes": true, 00:09:20.658 "zcopy": true, 00:09:20.658 "get_zone_info": false, 00:09:20.658 "zone_management": false, 00:09:20.658 "zone_append": false, 00:09:20.658 "compare": false, 00:09:20.658 "compare_and_write": false, 00:09:20.658 "abort": true, 00:09:20.658 "seek_hole": false, 00:09:20.658 "seek_data": false, 00:09:20.658 "copy": true, 00:09:20.658 "nvme_iov_md": false 00:09:20.658 }, 00:09:20.658 "memory_domains": [ 00:09:20.658 { 00:09:20.658 "dma_device_id": "system", 00:09:20.658 "dma_device_type": 1 00:09:20.658 }, 00:09:20.658 { 00:09:20.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.658 
"dma_device_type": 2 00:09:20.658 } 00:09:20.658 ], 00:09:20.658 "driver_specific": {} 00:09:20.658 } 00:09:20.658 ] 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.658 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.658 "name": "Existed_Raid", 00:09:20.658 "uuid": "00921186-b5dd-477d-9d6e-45706bacef1d", 00:09:20.658 "strip_size_kb": 64, 00:09:20.658 "state": "configuring", 00:09:20.658 "raid_level": "concat", 00:09:20.658 "superblock": true, 00:09:20.658 "num_base_bdevs": 3, 00:09:20.658 "num_base_bdevs_discovered": 2, 00:09:20.658 "num_base_bdevs_operational": 3, 00:09:20.658 "base_bdevs_list": [ 00:09:20.658 { 00:09:20.658 "name": "BaseBdev1", 00:09:20.658 "uuid": "afa27e0a-e267-47e6-a3e8-9d27beb31a44", 00:09:20.658 "is_configured": true, 00:09:20.658 "data_offset": 2048, 00:09:20.658 "data_size": 63488 00:09:20.658 }, 00:09:20.658 { 00:09:20.658 "name": null, 00:09:20.658 "uuid": "5dfe4123-797f-471c-9756-7e2b03cf4009", 00:09:20.658 "is_configured": false, 00:09:20.659 "data_offset": 0, 00:09:20.659 "data_size": 63488 00:09:20.659 }, 00:09:20.659 { 00:09:20.659 "name": "BaseBdev3", 00:09:20.659 "uuid": "8d8283c7-2ad3-4c0e-a424-2ef97254bdad", 00:09:20.659 "is_configured": true, 00:09:20.659 "data_offset": 2048, 00:09:20.659 "data_size": 63488 00:09:20.659 } 00:09:20.659 ] 00:09:20.659 }' 00:09:20.659 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.659 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.227 [2024-11-15 10:54:27.901796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.227 
10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.227 "name": "Existed_Raid", 00:09:21.227 "uuid": "00921186-b5dd-477d-9d6e-45706bacef1d", 00:09:21.227 "strip_size_kb": 64, 00:09:21.227 "state": "configuring", 00:09:21.227 "raid_level": "concat", 00:09:21.227 "superblock": true, 00:09:21.227 "num_base_bdevs": 3, 00:09:21.227 "num_base_bdevs_discovered": 1, 00:09:21.227 "num_base_bdevs_operational": 3, 00:09:21.227 "base_bdevs_list": [ 00:09:21.227 { 00:09:21.227 "name": "BaseBdev1", 00:09:21.227 "uuid": "afa27e0a-e267-47e6-a3e8-9d27beb31a44", 00:09:21.227 "is_configured": true, 00:09:21.227 "data_offset": 2048, 00:09:21.227 "data_size": 63488 00:09:21.227 }, 00:09:21.227 { 00:09:21.227 "name": null, 00:09:21.227 "uuid": "5dfe4123-797f-471c-9756-7e2b03cf4009", 00:09:21.227 "is_configured": false, 00:09:21.227 "data_offset": 0, 00:09:21.227 "data_size": 63488 00:09:21.227 }, 00:09:21.227 { 00:09:21.227 "name": null, 00:09:21.227 "uuid": "8d8283c7-2ad3-4c0e-a424-2ef97254bdad", 00:09:21.227 "is_configured": false, 00:09:21.227 "data_offset": 0, 00:09:21.227 "data_size": 63488 00:09:21.227 } 00:09:21.227 ] 00:09:21.227 }' 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.227 10:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.487 
10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.487 [2024-11-15 10:54:28.396986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.487 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.747 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.747 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.747 "name": "Existed_Raid", 00:09:21.747 "uuid": "00921186-b5dd-477d-9d6e-45706bacef1d", 00:09:21.747 "strip_size_kb": 64, 00:09:21.747 "state": "configuring", 00:09:21.747 "raid_level": "concat", 00:09:21.747 "superblock": true, 00:09:21.748 "num_base_bdevs": 3, 00:09:21.748 "num_base_bdevs_discovered": 2, 00:09:21.748 "num_base_bdevs_operational": 3, 00:09:21.748 "base_bdevs_list": [ 00:09:21.748 { 00:09:21.748 "name": "BaseBdev1", 00:09:21.748 "uuid": "afa27e0a-e267-47e6-a3e8-9d27beb31a44", 00:09:21.748 "is_configured": true, 00:09:21.748 "data_offset": 2048, 00:09:21.748 "data_size": 63488 00:09:21.748 }, 00:09:21.748 { 00:09:21.748 "name": null, 00:09:21.748 "uuid": "5dfe4123-797f-471c-9756-7e2b03cf4009", 00:09:21.748 "is_configured": false, 00:09:21.748 "data_offset": 0, 00:09:21.748 "data_size": 
63488 00:09:21.748 }, 00:09:21.748 { 00:09:21.748 "name": "BaseBdev3", 00:09:21.748 "uuid": "8d8283c7-2ad3-4c0e-a424-2ef97254bdad", 00:09:21.748 "is_configured": true, 00:09:21.748 "data_offset": 2048, 00:09:21.748 "data_size": 63488 00:09:21.748 } 00:09:21.748 ] 00:09:21.748 }' 00:09:21.748 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.748 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.007 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:22.007 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.007 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.007 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.007 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.007 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:22.007 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:22.007 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.007 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.007 [2024-11-15 10:54:28.880203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:22.277 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.277 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.277 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:22.277 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.277 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.277 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.277 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.277 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.277 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.277 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.278 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.278 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.278 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.278 10:54:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.278 10:54:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.278 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.278 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.278 "name": "Existed_Raid", 00:09:22.278 "uuid": "00921186-b5dd-477d-9d6e-45706bacef1d", 00:09:22.278 "strip_size_kb": 64, 00:09:22.278 "state": "configuring", 00:09:22.278 "raid_level": "concat", 00:09:22.278 "superblock": true, 00:09:22.278 "num_base_bdevs": 3, 00:09:22.278 "num_base_bdevs_discovered": 1, 00:09:22.278 "num_base_bdevs_operational": 
3, 00:09:22.278 "base_bdevs_list": [ 00:09:22.278 { 00:09:22.278 "name": null, 00:09:22.278 "uuid": "afa27e0a-e267-47e6-a3e8-9d27beb31a44", 00:09:22.278 "is_configured": false, 00:09:22.278 "data_offset": 0, 00:09:22.278 "data_size": 63488 00:09:22.278 }, 00:09:22.278 { 00:09:22.278 "name": null, 00:09:22.278 "uuid": "5dfe4123-797f-471c-9756-7e2b03cf4009", 00:09:22.278 "is_configured": false, 00:09:22.278 "data_offset": 0, 00:09:22.278 "data_size": 63488 00:09:22.278 }, 00:09:22.278 { 00:09:22.278 "name": "BaseBdev3", 00:09:22.278 "uuid": "8d8283c7-2ad3-4c0e-a424-2ef97254bdad", 00:09:22.278 "is_configured": true, 00:09:22.278 "data_offset": 2048, 00:09:22.278 "data_size": 63488 00:09:22.278 } 00:09:22.278 ] 00:09:22.278 }' 00:09:22.278 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.278 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:22.538 [2024-11-15 10:54:29.417322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.538 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:22.797 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.797 "name": "Existed_Raid", 00:09:22.797 "uuid": "00921186-b5dd-477d-9d6e-45706bacef1d", 00:09:22.797 "strip_size_kb": 64, 00:09:22.797 "state": "configuring", 00:09:22.797 "raid_level": "concat", 00:09:22.797 "superblock": true, 00:09:22.797 "num_base_bdevs": 3, 00:09:22.797 "num_base_bdevs_discovered": 2, 00:09:22.797 "num_base_bdevs_operational": 3, 00:09:22.797 "base_bdevs_list": [ 00:09:22.797 { 00:09:22.797 "name": null, 00:09:22.797 "uuid": "afa27e0a-e267-47e6-a3e8-9d27beb31a44", 00:09:22.797 "is_configured": false, 00:09:22.797 "data_offset": 0, 00:09:22.797 "data_size": 63488 00:09:22.797 }, 00:09:22.797 { 00:09:22.797 "name": "BaseBdev2", 00:09:22.797 "uuid": "5dfe4123-797f-471c-9756-7e2b03cf4009", 00:09:22.797 "is_configured": true, 00:09:22.797 "data_offset": 2048, 00:09:22.797 "data_size": 63488 00:09:22.797 }, 00:09:22.797 { 00:09:22.797 "name": "BaseBdev3", 00:09:22.797 "uuid": "8d8283c7-2ad3-4c0e-a424-2ef97254bdad", 00:09:22.797 "is_configured": true, 00:09:22.797 "data_offset": 2048, 00:09:22.797 "data_size": 63488 00:09:22.797 } 00:09:22.797 ] 00:09:22.797 }' 00:09:22.797 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.797 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.057 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.057 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.057 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:23.057 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.057 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:23.057 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:23.057 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:23.057 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.057 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.057 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.057 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.057 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u afa27e0a-e267-47e6-a3e8-9d27beb31a44 00:09:23.057 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.057 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.057 [2024-11-15 10:54:29.980608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:23.057 [2024-11-15 10:54:29.980865] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:23.057 [2024-11-15 10:54:29.980882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:23.317 NewBaseBdev 00:09:23.317 [2024-11-15 10:54:29.981179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:23.317 [2024-11-15 10:54:29.981338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:23.317 [2024-11-15 10:54:29.981348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:23.317 [2024-11-15 10:54:29.981510] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.317 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.317 10:54:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:23.317 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:23.317 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:23.317 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:23.317 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:23.317 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:23.317 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:23.317 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.317 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.317 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.317 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:23.317 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.318 10:54:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.318 [ 00:09:23.318 { 00:09:23.318 "name": "NewBaseBdev", 00:09:23.318 "aliases": [ 00:09:23.318 "afa27e0a-e267-47e6-a3e8-9d27beb31a44" 00:09:23.318 ], 00:09:23.318 "product_name": "Malloc disk", 00:09:23.318 "block_size": 512, 00:09:23.318 "num_blocks": 65536, 00:09:23.318 "uuid": 
"afa27e0a-e267-47e6-a3e8-9d27beb31a44", 00:09:23.318 "assigned_rate_limits": { 00:09:23.318 "rw_ios_per_sec": 0, 00:09:23.318 "rw_mbytes_per_sec": 0, 00:09:23.318 "r_mbytes_per_sec": 0, 00:09:23.318 "w_mbytes_per_sec": 0 00:09:23.318 }, 00:09:23.318 "claimed": true, 00:09:23.318 "claim_type": "exclusive_write", 00:09:23.318 "zoned": false, 00:09:23.318 "supported_io_types": { 00:09:23.318 "read": true, 00:09:23.318 "write": true, 00:09:23.318 "unmap": true, 00:09:23.318 "flush": true, 00:09:23.318 "reset": true, 00:09:23.318 "nvme_admin": false, 00:09:23.318 "nvme_io": false, 00:09:23.318 "nvme_io_md": false, 00:09:23.318 "write_zeroes": true, 00:09:23.318 "zcopy": true, 00:09:23.318 "get_zone_info": false, 00:09:23.318 "zone_management": false, 00:09:23.318 "zone_append": false, 00:09:23.318 "compare": false, 00:09:23.318 "compare_and_write": false, 00:09:23.318 "abort": true, 00:09:23.318 "seek_hole": false, 00:09:23.318 "seek_data": false, 00:09:23.318 "copy": true, 00:09:23.318 "nvme_iov_md": false 00:09:23.318 }, 00:09:23.318 "memory_domains": [ 00:09:23.318 { 00:09:23.318 "dma_device_id": "system", 00:09:23.318 "dma_device_type": 1 00:09:23.318 }, 00:09:23.318 { 00:09:23.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.318 "dma_device_type": 2 00:09:23.318 } 00:09:23.318 ], 00:09:23.318 "driver_specific": {} 00:09:23.318 } 00:09:23.318 ] 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.318 10:54:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.318 "name": "Existed_Raid", 00:09:23.318 "uuid": "00921186-b5dd-477d-9d6e-45706bacef1d", 00:09:23.318 "strip_size_kb": 64, 00:09:23.318 "state": "online", 00:09:23.318 "raid_level": "concat", 00:09:23.318 "superblock": true, 00:09:23.318 "num_base_bdevs": 3, 00:09:23.318 "num_base_bdevs_discovered": 3, 00:09:23.318 "num_base_bdevs_operational": 3, 00:09:23.318 "base_bdevs_list": [ 00:09:23.318 { 00:09:23.318 "name": "NewBaseBdev", 00:09:23.318 "uuid": "afa27e0a-e267-47e6-a3e8-9d27beb31a44", 00:09:23.318 "is_configured": 
true, 00:09:23.318 "data_offset": 2048, 00:09:23.318 "data_size": 63488 00:09:23.318 }, 00:09:23.318 { 00:09:23.318 "name": "BaseBdev2", 00:09:23.318 "uuid": "5dfe4123-797f-471c-9756-7e2b03cf4009", 00:09:23.318 "is_configured": true, 00:09:23.318 "data_offset": 2048, 00:09:23.318 "data_size": 63488 00:09:23.318 }, 00:09:23.318 { 00:09:23.318 "name": "BaseBdev3", 00:09:23.318 "uuid": "8d8283c7-2ad3-4c0e-a424-2ef97254bdad", 00:09:23.318 "is_configured": true, 00:09:23.318 "data_offset": 2048, 00:09:23.318 "data_size": 63488 00:09:23.318 } 00:09:23.318 ] 00:09:23.318 }' 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.318 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:23.577 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:23.577 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:23.577 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:23.578 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:23.578 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:23.578 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:23.578 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:23.578 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.578 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.578 [2024-11-15 10:54:30.468198] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.578 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.578 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:23.578 "name": "Existed_Raid", 00:09:23.578 "aliases": [ 00:09:23.578 "00921186-b5dd-477d-9d6e-45706bacef1d" 00:09:23.578 ], 00:09:23.578 "product_name": "Raid Volume", 00:09:23.578 "block_size": 512, 00:09:23.578 "num_blocks": 190464, 00:09:23.578 "uuid": "00921186-b5dd-477d-9d6e-45706bacef1d", 00:09:23.578 "assigned_rate_limits": { 00:09:23.578 "rw_ios_per_sec": 0, 00:09:23.578 "rw_mbytes_per_sec": 0, 00:09:23.578 "r_mbytes_per_sec": 0, 00:09:23.578 "w_mbytes_per_sec": 0 00:09:23.578 }, 00:09:23.578 "claimed": false, 00:09:23.578 "zoned": false, 00:09:23.578 "supported_io_types": { 00:09:23.578 "read": true, 00:09:23.578 "write": true, 00:09:23.578 "unmap": true, 00:09:23.578 "flush": true, 00:09:23.578 "reset": true, 00:09:23.578 "nvme_admin": false, 00:09:23.578 "nvme_io": false, 00:09:23.578 "nvme_io_md": false, 00:09:23.578 "write_zeroes": true, 00:09:23.578 "zcopy": false, 00:09:23.578 "get_zone_info": false, 00:09:23.578 "zone_management": false, 00:09:23.578 "zone_append": false, 00:09:23.578 "compare": false, 00:09:23.578 "compare_and_write": false, 00:09:23.578 "abort": false, 00:09:23.578 "seek_hole": false, 00:09:23.578 "seek_data": false, 00:09:23.578 "copy": false, 00:09:23.578 "nvme_iov_md": false 00:09:23.578 }, 00:09:23.578 "memory_domains": [ 00:09:23.578 { 00:09:23.578 "dma_device_id": "system", 00:09:23.578 "dma_device_type": 1 00:09:23.578 }, 00:09:23.578 { 00:09:23.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.578 "dma_device_type": 2 00:09:23.578 }, 00:09:23.578 { 00:09:23.578 "dma_device_id": "system", 00:09:23.578 "dma_device_type": 1 00:09:23.578 }, 00:09:23.578 { 00:09:23.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.578 
"dma_device_type": 2 00:09:23.578 }, 00:09:23.578 { 00:09:23.578 "dma_device_id": "system", 00:09:23.578 "dma_device_type": 1 00:09:23.578 }, 00:09:23.578 { 00:09:23.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.578 "dma_device_type": 2 00:09:23.578 } 00:09:23.578 ], 00:09:23.578 "driver_specific": { 00:09:23.578 "raid": { 00:09:23.578 "uuid": "00921186-b5dd-477d-9d6e-45706bacef1d", 00:09:23.578 "strip_size_kb": 64, 00:09:23.578 "state": "online", 00:09:23.578 "raid_level": "concat", 00:09:23.578 "superblock": true, 00:09:23.578 "num_base_bdevs": 3, 00:09:23.578 "num_base_bdevs_discovered": 3, 00:09:23.578 "num_base_bdevs_operational": 3, 00:09:23.578 "base_bdevs_list": [ 00:09:23.578 { 00:09:23.578 "name": "NewBaseBdev", 00:09:23.578 "uuid": "afa27e0a-e267-47e6-a3e8-9d27beb31a44", 00:09:23.578 "is_configured": true, 00:09:23.578 "data_offset": 2048, 00:09:23.578 "data_size": 63488 00:09:23.578 }, 00:09:23.578 { 00:09:23.578 "name": "BaseBdev2", 00:09:23.578 "uuid": "5dfe4123-797f-471c-9756-7e2b03cf4009", 00:09:23.578 "is_configured": true, 00:09:23.578 "data_offset": 2048, 00:09:23.578 "data_size": 63488 00:09:23.578 }, 00:09:23.578 { 00:09:23.578 "name": "BaseBdev3", 00:09:23.578 "uuid": "8d8283c7-2ad3-4c0e-a424-2ef97254bdad", 00:09:23.578 "is_configured": true, 00:09:23.578 "data_offset": 2048, 00:09:23.578 "data_size": 63488 00:09:23.578 } 00:09:23.578 ] 00:09:23.578 } 00:09:23.578 } 00:09:23.578 }' 00:09:23.578 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:23.839 BaseBdev2 00:09:23.839 BaseBdev3' 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.839 
10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.839 [2024-11-15 10:54:30.719477] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.839 [2024-11-15 10:54:30.719515] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.839 [2024-11-15 10:54:30.719607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.839 [2024-11-15 10:54:30.719662] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.839 [2024-11-15 10:54:30.719678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:23.839 10:54:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66379 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66379 ']' 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 66379 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66379 00:09:23.839 killing process with pid 66379 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66379' 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66379 00:09:23.839 [2024-11-15 10:54:30.759719] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:23.839 10:54:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66379 00:09:24.407 [2024-11-15 10:54:31.068066] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:25.343 ************************************ 00:09:25.343 END TEST raid_state_function_test_sb 00:09:25.343 ************************************ 00:09:25.343 10:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:25.343 00:09:25.343 real 0m10.628s 00:09:25.343 user 0m16.847s 00:09:25.343 sys 0m1.878s 00:09:25.343 10:54:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:25.343 10:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.343 10:54:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:25.343 10:54:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:25.343 10:54:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:25.602 10:54:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:25.602 ************************************ 00:09:25.602 START TEST raid_superblock_test 00:09:25.602 ************************************ 00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:25.602 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:25.603 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67001 00:09:25.603 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:25.603 10:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67001 00:09:25.603 10:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 67001 ']' 00:09:25.603 10:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.603 10:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:25.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.603 10:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.603 10:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:25.603 10:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.603 [2024-11-15 10:54:32.367896] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:09:25.603 [2024-11-15 10:54:32.368043] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67001 ] 00:09:25.862 [2024-11-15 10:54:32.542526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.862 [2024-11-15 10:54:32.656752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.122 [2024-11-15 10:54:32.857420] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.122 [2024-11-15 10:54:32.857487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:26.382 
10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.382 malloc1 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.382 [2024-11-15 10:54:33.252285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:26.382 [2024-11-15 10:54:33.252373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.382 [2024-11-15 10:54:33.252396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:26.382 [2024-11-15 10:54:33.252406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.382 [2024-11-15 10:54:33.254573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.382 [2024-11-15 10:54:33.254609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:26.382 pt1 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.382 malloc2 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.382 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.382 [2024-11-15 10:54:33.305953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:26.382 [2024-11-15 10:54:33.306016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.382 [2024-11-15 10:54:33.306037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:26.382 [2024-11-15 10:54:33.306046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.642 [2024-11-15 10:54:33.308266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.642 [2024-11-15 10:54:33.308316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:26.642 
pt2 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.642 malloc3 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.642 [2024-11-15 10:54:33.370821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:26.642 [2024-11-15 10:54:33.370903] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.642 [2024-11-15 10:54:33.370924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:26.642 [2024-11-15 10:54:33.370933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.642 [2024-11-15 10:54:33.373213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.642 [2024-11-15 10:54:33.373260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:26.642 pt3 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.642 [2024-11-15 10:54:33.382913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:26.642 [2024-11-15 10:54:33.384788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:26.642 [2024-11-15 10:54:33.384881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:26.642 [2024-11-15 10:54:33.385051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:26.642 [2024-11-15 10:54:33.385067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:26.642 [2024-11-15 10:54:33.385418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:26.642 [2024-11-15 10:54:33.385608] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:26.642 [2024-11-15 10:54:33.385619] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:26.642 [2024-11-15 10:54:33.385822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.642 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.643 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.643 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.643 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.643 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.643 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.643 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.643 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.643 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.643 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.643 10:54:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.643 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.643 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.643 "name": "raid_bdev1", 00:09:26.643 "uuid": "7d594388-1c1f-4ec7-89ab-ea845112c803", 00:09:26.643 "strip_size_kb": 64, 00:09:26.643 "state": "online", 00:09:26.643 "raid_level": "concat", 00:09:26.643 "superblock": true, 00:09:26.643 "num_base_bdevs": 3, 00:09:26.643 "num_base_bdevs_discovered": 3, 00:09:26.643 "num_base_bdevs_operational": 3, 00:09:26.643 "base_bdevs_list": [ 00:09:26.643 { 00:09:26.643 "name": "pt1", 00:09:26.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.643 "is_configured": true, 00:09:26.643 "data_offset": 2048, 00:09:26.643 "data_size": 63488 00:09:26.643 }, 00:09:26.643 { 00:09:26.643 "name": "pt2", 00:09:26.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.643 "is_configured": true, 00:09:26.643 "data_offset": 2048, 00:09:26.643 "data_size": 63488 00:09:26.643 }, 00:09:26.643 { 00:09:26.643 "name": "pt3", 00:09:26.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:26.643 "is_configured": true, 00:09:26.643 "data_offset": 2048, 00:09:26.643 "data_size": 63488 00:09:26.643 } 00:09:26.643 ] 00:09:26.643 }' 00:09:26.643 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.643 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.222 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:27.222 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:27.222 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:27.222 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:27.222 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:27.222 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:27.222 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:27.222 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:27.222 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.222 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.223 [2024-11-15 10:54:33.854362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.223 10:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.223 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:27.223 "name": "raid_bdev1", 00:09:27.223 "aliases": [ 00:09:27.223 "7d594388-1c1f-4ec7-89ab-ea845112c803" 00:09:27.223 ], 00:09:27.223 "product_name": "Raid Volume", 00:09:27.223 "block_size": 512, 00:09:27.223 "num_blocks": 190464, 00:09:27.223 "uuid": "7d594388-1c1f-4ec7-89ab-ea845112c803", 00:09:27.223 "assigned_rate_limits": { 00:09:27.223 "rw_ios_per_sec": 0, 00:09:27.223 "rw_mbytes_per_sec": 0, 00:09:27.223 "r_mbytes_per_sec": 0, 00:09:27.223 "w_mbytes_per_sec": 0 00:09:27.223 }, 00:09:27.223 "claimed": false, 00:09:27.223 "zoned": false, 00:09:27.223 "supported_io_types": { 00:09:27.223 "read": true, 00:09:27.223 "write": true, 00:09:27.223 "unmap": true, 00:09:27.223 "flush": true, 00:09:27.223 "reset": true, 00:09:27.223 "nvme_admin": false, 00:09:27.223 "nvme_io": false, 00:09:27.223 "nvme_io_md": false, 00:09:27.223 "write_zeroes": true, 00:09:27.223 "zcopy": false, 00:09:27.223 "get_zone_info": false, 00:09:27.223 "zone_management": false, 00:09:27.223 "zone_append": false, 00:09:27.223 "compare": 
false, 00:09:27.223 "compare_and_write": false, 00:09:27.223 "abort": false, 00:09:27.223 "seek_hole": false, 00:09:27.223 "seek_data": false, 00:09:27.223 "copy": false, 00:09:27.223 "nvme_iov_md": false 00:09:27.223 }, 00:09:27.223 "memory_domains": [ 00:09:27.223 { 00:09:27.223 "dma_device_id": "system", 00:09:27.223 "dma_device_type": 1 00:09:27.223 }, 00:09:27.223 { 00:09:27.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.223 "dma_device_type": 2 00:09:27.223 }, 00:09:27.223 { 00:09:27.223 "dma_device_id": "system", 00:09:27.223 "dma_device_type": 1 00:09:27.223 }, 00:09:27.223 { 00:09:27.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.223 "dma_device_type": 2 00:09:27.223 }, 00:09:27.223 { 00:09:27.223 "dma_device_id": "system", 00:09:27.223 "dma_device_type": 1 00:09:27.223 }, 00:09:27.223 { 00:09:27.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.223 "dma_device_type": 2 00:09:27.223 } 00:09:27.223 ], 00:09:27.223 "driver_specific": { 00:09:27.223 "raid": { 00:09:27.223 "uuid": "7d594388-1c1f-4ec7-89ab-ea845112c803", 00:09:27.223 "strip_size_kb": 64, 00:09:27.223 "state": "online", 00:09:27.223 "raid_level": "concat", 00:09:27.223 "superblock": true, 00:09:27.223 "num_base_bdevs": 3, 00:09:27.223 "num_base_bdevs_discovered": 3, 00:09:27.223 "num_base_bdevs_operational": 3, 00:09:27.223 "base_bdevs_list": [ 00:09:27.223 { 00:09:27.223 "name": "pt1", 00:09:27.223 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:27.223 "is_configured": true, 00:09:27.223 "data_offset": 2048, 00:09:27.223 "data_size": 63488 00:09:27.223 }, 00:09:27.223 { 00:09:27.223 "name": "pt2", 00:09:27.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:27.223 "is_configured": true, 00:09:27.223 "data_offset": 2048, 00:09:27.223 "data_size": 63488 00:09:27.223 }, 00:09:27.223 { 00:09:27.223 "name": "pt3", 00:09:27.223 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:27.223 "is_configured": true, 00:09:27.223 "data_offset": 2048, 00:09:27.223 
"data_size": 63488 00:09:27.223 } 00:09:27.223 ] 00:09:27.223 } 00:09:27.223 } 00:09:27.223 }' 00:09:27.223 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:27.223 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:27.223 pt2 00:09:27.223 pt3' 00:09:27.223 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.223 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:27.223 10:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.223 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:27.483 [2024-11-15 10:54:34.149785] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7d594388-1c1f-4ec7-89ab-ea845112c803 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7d594388-1c1f-4ec7-89ab-ea845112c803 ']' 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.483 [2024-11-15 10:54:34.197450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:27.483 [2024-11-15 10:54:34.197484] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.483 [2024-11-15 10:54:34.197578] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.483 [2024-11-15 10:54:34.197640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.483 [2024-11-15 10:54:34.197650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.483 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.484 [2024-11-15 10:54:34.345253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:27.484 [2024-11-15 10:54:34.347148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:27.484 
[2024-11-15 10:54:34.347201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:27.484 [2024-11-15 10:54:34.347250] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:27.484 [2024-11-15 10:54:34.347322] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:27.484 [2024-11-15 10:54:34.347343] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:27.484 [2024-11-15 10:54:34.347359] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:27.484 [2024-11-15 10:54:34.347369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:27.484 request: 00:09:27.484 { 00:09:27.484 "name": "raid_bdev1", 00:09:27.484 "raid_level": "concat", 00:09:27.484 "base_bdevs": [ 00:09:27.484 "malloc1", 00:09:27.484 "malloc2", 00:09:27.484 "malloc3" 00:09:27.484 ], 00:09:27.484 "strip_size_kb": 64, 00:09:27.484 "superblock": false, 00:09:27.484 "method": "bdev_raid_create", 00:09:27.484 "req_id": 1 00:09:27.484 } 00:09:27.484 Got JSON-RPC error response 00:09:27.484 response: 00:09:27.484 { 00:09:27.484 "code": -17, 00:09:27.484 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:27.484 } 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:27.484 10:54:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.484 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.744 [2024-11-15 10:54:34.409062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:27.744 [2024-11-15 10:54:34.409160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.744 [2024-11-15 10:54:34.409196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:27.744 [2024-11-15 10:54:34.409244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.744 [2024-11-15 10:54:34.411400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.744 [2024-11-15 10:54:34.411479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:27.744 [2024-11-15 10:54:34.411616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:27.744 [2024-11-15 10:54:34.411695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:09:27.744 pt1 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.744 "name": "raid_bdev1", 00:09:27.744 "uuid": 
"7d594388-1c1f-4ec7-89ab-ea845112c803", 00:09:27.744 "strip_size_kb": 64, 00:09:27.744 "state": "configuring", 00:09:27.744 "raid_level": "concat", 00:09:27.744 "superblock": true, 00:09:27.744 "num_base_bdevs": 3, 00:09:27.744 "num_base_bdevs_discovered": 1, 00:09:27.744 "num_base_bdevs_operational": 3, 00:09:27.744 "base_bdevs_list": [ 00:09:27.744 { 00:09:27.744 "name": "pt1", 00:09:27.744 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:27.744 "is_configured": true, 00:09:27.744 "data_offset": 2048, 00:09:27.744 "data_size": 63488 00:09:27.744 }, 00:09:27.744 { 00:09:27.744 "name": null, 00:09:27.744 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:27.744 "is_configured": false, 00:09:27.744 "data_offset": 2048, 00:09:27.744 "data_size": 63488 00:09:27.744 }, 00:09:27.744 { 00:09:27.744 "name": null, 00:09:27.744 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:27.744 "is_configured": false, 00:09:27.744 "data_offset": 2048, 00:09:27.744 "data_size": 63488 00:09:27.744 } 00:09:27.744 ] 00:09:27.744 }' 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.744 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.006 [2024-11-15 10:54:34.872339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:28.006 [2024-11-15 10:54:34.872407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.006 [2024-11-15 10:54:34.872431] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:28.006 [2024-11-15 10:54:34.872440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.006 [2024-11-15 10:54:34.872888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.006 [2024-11-15 10:54:34.872905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:28.006 [2024-11-15 10:54:34.872995] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:28.006 [2024-11-15 10:54:34.873015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:28.006 pt2 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.006 [2024-11-15 10:54:34.880323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.006 "name": "raid_bdev1", 00:09:28.006 "uuid": "7d594388-1c1f-4ec7-89ab-ea845112c803", 00:09:28.006 "strip_size_kb": 64, 00:09:28.006 "state": "configuring", 00:09:28.006 "raid_level": "concat", 00:09:28.006 "superblock": true, 00:09:28.006 "num_base_bdevs": 3, 00:09:28.006 "num_base_bdevs_discovered": 1, 00:09:28.006 "num_base_bdevs_operational": 3, 00:09:28.006 "base_bdevs_list": [ 00:09:28.006 { 00:09:28.006 "name": "pt1", 00:09:28.006 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:28.006 "is_configured": true, 00:09:28.006 "data_offset": 2048, 00:09:28.006 "data_size": 63488 00:09:28.006 }, 00:09:28.006 { 00:09:28.006 "name": null, 00:09:28.006 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:28.006 "is_configured": false, 00:09:28.006 "data_offset": 0, 00:09:28.006 "data_size": 63488 00:09:28.006 }, 00:09:28.006 { 00:09:28.006 "name": null, 00:09:28.006 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:28.006 "is_configured": false, 00:09:28.006 "data_offset": 2048, 00:09:28.006 "data_size": 63488 00:09:28.006 } 00:09:28.006 ] 00:09:28.006 }' 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.006 10:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.576 [2024-11-15 10:54:35.347530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:28.576 [2024-11-15 10:54:35.347676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.576 [2024-11-15 10:54:35.347711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:28.576 [2024-11-15 10:54:35.347741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.576 [2024-11-15 10:54:35.348258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.576 [2024-11-15 10:54:35.348345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:28.576 [2024-11-15 10:54:35.348469] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:28.576 [2024-11-15 10:54:35.348523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:28.576 pt2 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.576 [2024-11-15 10:54:35.359506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:28.576 [2024-11-15 10:54:35.359606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.576 [2024-11-15 10:54:35.359639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:28.576 [2024-11-15 10:54:35.359668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.576 [2024-11-15 10:54:35.360153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.576 [2024-11-15 10:54:35.360216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:28.576 [2024-11-15 10:54:35.360339] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:28.576 [2024-11-15 10:54:35.360392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:28.576 [2024-11-15 10:54:35.360541] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:28.576 [2024-11-15 10:54:35.360581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:28.576 [2024-11-15 10:54:35.360853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:28.576 [2024-11-15 
10:54:35.361029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:28.576 [2024-11-15 10:54:35.361066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:28.576 [2024-11-15 10:54:35.361253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.576 pt3 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.576 "name": "raid_bdev1", 00:09:28.576 "uuid": "7d594388-1c1f-4ec7-89ab-ea845112c803", 00:09:28.576 "strip_size_kb": 64, 00:09:28.576 "state": "online", 00:09:28.576 "raid_level": "concat", 00:09:28.576 "superblock": true, 00:09:28.576 "num_base_bdevs": 3, 00:09:28.576 "num_base_bdevs_discovered": 3, 00:09:28.576 "num_base_bdevs_operational": 3, 00:09:28.576 "base_bdevs_list": [ 00:09:28.576 { 00:09:28.576 "name": "pt1", 00:09:28.576 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:28.576 "is_configured": true, 00:09:28.576 "data_offset": 2048, 00:09:28.576 "data_size": 63488 00:09:28.576 }, 00:09:28.576 { 00:09:28.576 "name": "pt2", 00:09:28.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:28.576 "is_configured": true, 00:09:28.576 "data_offset": 2048, 00:09:28.576 "data_size": 63488 00:09:28.576 }, 00:09:28.576 { 00:09:28.576 "name": "pt3", 00:09:28.576 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:28.576 "is_configured": true, 00:09:28.576 "data_offset": 2048, 00:09:28.576 "data_size": 63488 00:09:28.576 } 00:09:28.576 ] 00:09:28.576 }' 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.576 10:54:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 
00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.146 [2024-11-15 10:54:35.859014] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.146 "name": "raid_bdev1", 00:09:29.146 "aliases": [ 00:09:29.146 "7d594388-1c1f-4ec7-89ab-ea845112c803" 00:09:29.146 ], 00:09:29.146 "product_name": "Raid Volume", 00:09:29.146 "block_size": 512, 00:09:29.146 "num_blocks": 190464, 00:09:29.146 "uuid": "7d594388-1c1f-4ec7-89ab-ea845112c803", 00:09:29.146 "assigned_rate_limits": { 00:09:29.146 "rw_ios_per_sec": 0, 00:09:29.146 "rw_mbytes_per_sec": 0, 00:09:29.146 "r_mbytes_per_sec": 0, 00:09:29.146 "w_mbytes_per_sec": 0 00:09:29.146 }, 00:09:29.146 "claimed": false, 00:09:29.146 "zoned": false, 00:09:29.146 "supported_io_types": { 00:09:29.146 "read": true, 00:09:29.146 "write": true, 00:09:29.146 "unmap": true, 00:09:29.146 "flush": true, 00:09:29.146 "reset": true, 00:09:29.146 "nvme_admin": false, 00:09:29.146 "nvme_io": false, 00:09:29.146 "nvme_io_md": false, 
00:09:29.146 "write_zeroes": true, 00:09:29.146 "zcopy": false, 00:09:29.146 "get_zone_info": false, 00:09:29.146 "zone_management": false, 00:09:29.146 "zone_append": false, 00:09:29.146 "compare": false, 00:09:29.146 "compare_and_write": false, 00:09:29.146 "abort": false, 00:09:29.146 "seek_hole": false, 00:09:29.146 "seek_data": false, 00:09:29.146 "copy": false, 00:09:29.146 "nvme_iov_md": false 00:09:29.146 }, 00:09:29.146 "memory_domains": [ 00:09:29.146 { 00:09:29.146 "dma_device_id": "system", 00:09:29.146 "dma_device_type": 1 00:09:29.146 }, 00:09:29.146 { 00:09:29.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.146 "dma_device_type": 2 00:09:29.146 }, 00:09:29.146 { 00:09:29.146 "dma_device_id": "system", 00:09:29.146 "dma_device_type": 1 00:09:29.146 }, 00:09:29.146 { 00:09:29.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.146 "dma_device_type": 2 00:09:29.146 }, 00:09:29.146 { 00:09:29.146 "dma_device_id": "system", 00:09:29.146 "dma_device_type": 1 00:09:29.146 }, 00:09:29.146 { 00:09:29.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.146 "dma_device_type": 2 00:09:29.146 } 00:09:29.146 ], 00:09:29.146 "driver_specific": { 00:09:29.146 "raid": { 00:09:29.146 "uuid": "7d594388-1c1f-4ec7-89ab-ea845112c803", 00:09:29.146 "strip_size_kb": 64, 00:09:29.146 "state": "online", 00:09:29.146 "raid_level": "concat", 00:09:29.146 "superblock": true, 00:09:29.146 "num_base_bdevs": 3, 00:09:29.146 "num_base_bdevs_discovered": 3, 00:09:29.146 "num_base_bdevs_operational": 3, 00:09:29.146 "base_bdevs_list": [ 00:09:29.146 { 00:09:29.146 "name": "pt1", 00:09:29.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:29.146 "is_configured": true, 00:09:29.146 "data_offset": 2048, 00:09:29.146 "data_size": 63488 00:09:29.146 }, 00:09:29.146 { 00:09:29.146 "name": "pt2", 00:09:29.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.146 "is_configured": true, 00:09:29.146 "data_offset": 2048, 00:09:29.146 "data_size": 63488 00:09:29.146 }, 
00:09:29.146 { 00:09:29.146 "name": "pt3", 00:09:29.146 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:29.146 "is_configured": true, 00:09:29.146 "data_offset": 2048, 00:09:29.146 "data_size": 63488 00:09:29.146 } 00:09:29.146 ] 00:09:29.146 } 00:09:29.146 } 00:09:29.146 }' 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:29.146 pt2 00:09:29.146 pt3' 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.146 10:54:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.146 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.146 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.146 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.146 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.147 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:09:29.147 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:29.147 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.147 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.147 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.405 
[2024-11-15 10:54:36.158494] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7d594388-1c1f-4ec7-89ab-ea845112c803 '!=' 7d594388-1c1f-4ec7-89ab-ea845112c803 ']' 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67001 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 67001 ']' 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 67001 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67001 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67001' 00:09:29.405 killing process with pid 67001 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 67001 00:09:29.405 [2024-11-15 10:54:36.227480] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.405 10:54:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@976 -- # wait 67001 00:09:29.405 [2024-11-15 10:54:36.227655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.405 [2024-11-15 10:54:36.227719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.405 [2024-11-15 10:54:36.227731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:29.665 [2024-11-15 10:54:36.539189] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:31.044 10:54:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:31.044 00:09:31.044 real 0m5.377s 00:09:31.044 user 0m7.794s 00:09:31.044 sys 0m0.900s 00:09:31.044 ************************************ 00:09:31.044 END TEST raid_superblock_test 00:09:31.044 ************************************ 00:09:31.044 10:54:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:31.044 10:54:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.044 10:54:37 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:31.044 10:54:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:31.044 10:54:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:31.044 10:54:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.044 ************************************ 00:09:31.044 START TEST raid_read_error_test 00:09:31.044 ************************************ 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:31.044 10:54:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:31.044 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:31.045 10:54:37 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:31.045 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:31.045 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:31.045 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:31.045 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.j1n7xWrCu9 00:09:31.045 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67254 00:09:31.045 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:31.045 10:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67254 00:09:31.045 10:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 67254 ']' 00:09:31.045 10:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.045 10:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:31.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.045 10:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.045 10:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:31.045 10:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.045 [2024-11-15 10:54:37.822772] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:09:31.045 [2024-11-15 10:54:37.823123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67254 ] 00:09:31.304 [2024-11-15 10:54:38.013410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.304 [2024-11-15 10:54:38.131075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.564 [2024-11-15 10:54:38.350434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.564 [2024-11-15 10:54:38.350487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.824 BaseBdev1_malloc 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.824 true 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.824 [2024-11-15 10:54:38.719610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:31.824 [2024-11-15 10:54:38.719710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.824 [2024-11-15 10:54:38.719733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:31.824 [2024-11-15 10:54:38.719744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.824 [2024-11-15 10:54:38.721855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.824 [2024-11-15 10:54:38.721898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:31.824 BaseBdev1 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.824 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.084 BaseBdev2_malloc 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.084 true 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.084 [2024-11-15 10:54:38.785445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:32.084 [2024-11-15 10:54:38.785608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.084 [2024-11-15 10:54:38.785639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:32.084 [2024-11-15 10:54:38.785654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.084 [2024-11-15 10:54:38.788224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.084 [2024-11-15 10:54:38.788278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:32.084 BaseBdev2 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.084 BaseBdev3_malloc 00:09:32.084 10:54:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.084 true 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.084 [2024-11-15 10:54:38.863437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:32.084 [2024-11-15 10:54:38.863493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.084 [2024-11-15 10:54:38.863508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:32.084 [2024-11-15 10:54:38.863518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.084 [2024-11-15 10:54:38.865669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.084 [2024-11-15 10:54:38.865756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:32.084 BaseBdev3 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.084 [2024-11-15 10:54:38.875508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.084 [2024-11-15 10:54:38.877334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.084 [2024-11-15 10:54:38.877408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.084 [2024-11-15 10:54:38.877600] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:32.084 [2024-11-15 10:54:38.877611] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:32.084 [2024-11-15 10:54:38.877859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:32.084 [2024-11-15 10:54:38.878015] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:32.084 [2024-11-15 10:54:38.878027] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:32.084 [2024-11-15 10:54:38.878187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.084 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.085 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.085 10:54:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.085 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.085 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.085 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.085 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.085 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.085 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.085 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.085 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.085 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.085 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.085 "name": "raid_bdev1", 00:09:32.085 "uuid": "eb088a23-6c97-4e24-a673-523933a64b90", 00:09:32.085 "strip_size_kb": 64, 00:09:32.085 "state": "online", 00:09:32.085 "raid_level": "concat", 00:09:32.085 "superblock": true, 00:09:32.085 "num_base_bdevs": 3, 00:09:32.085 "num_base_bdevs_discovered": 3, 00:09:32.085 "num_base_bdevs_operational": 3, 00:09:32.085 "base_bdevs_list": [ 00:09:32.085 { 00:09:32.085 "name": "BaseBdev1", 00:09:32.085 "uuid": "1b8ad5b0-d37e-5787-b605-c0001df2813b", 00:09:32.085 "is_configured": true, 00:09:32.085 "data_offset": 2048, 00:09:32.085 "data_size": 63488 00:09:32.085 }, 00:09:32.085 { 00:09:32.085 "name": "BaseBdev2", 00:09:32.085 "uuid": "dbd638d6-250e-5342-959f-5f37e7f53e0a", 00:09:32.085 "is_configured": true, 00:09:32.085 "data_offset": 2048, 00:09:32.085 "data_size": 63488 
00:09:32.085 }, 00:09:32.085 { 00:09:32.085 "name": "BaseBdev3", 00:09:32.085 "uuid": "62e32a92-b653-54c3-9ce7-25c7f0d0adc4", 00:09:32.085 "is_configured": true, 00:09:32.085 "data_offset": 2048, 00:09:32.085 "data_size": 63488 00:09:32.085 } 00:09:32.085 ] 00:09:32.085 }' 00:09:32.085 10:54:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.085 10:54:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.654 10:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:32.654 10:54:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:32.654 [2024-11-15 10:54:39.519939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.592 "name": "raid_bdev1", 00:09:33.592 "uuid": "eb088a23-6c97-4e24-a673-523933a64b90", 00:09:33.592 "strip_size_kb": 64, 00:09:33.592 "state": "online", 00:09:33.592 "raid_level": "concat", 00:09:33.592 "superblock": true, 00:09:33.592 "num_base_bdevs": 3, 00:09:33.592 "num_base_bdevs_discovered": 3, 00:09:33.592 "num_base_bdevs_operational": 3, 00:09:33.592 "base_bdevs_list": [ 00:09:33.592 { 00:09:33.592 "name": "BaseBdev1", 00:09:33.592 "uuid": "1b8ad5b0-d37e-5787-b605-c0001df2813b", 00:09:33.592 "is_configured": true, 00:09:33.592 "data_offset": 2048, 00:09:33.592 "data_size": 63488 
00:09:33.592 }, 00:09:33.592 { 00:09:33.592 "name": "BaseBdev2", 00:09:33.592 "uuid": "dbd638d6-250e-5342-959f-5f37e7f53e0a", 00:09:33.592 "is_configured": true, 00:09:33.592 "data_offset": 2048, 00:09:33.592 "data_size": 63488 00:09:33.592 }, 00:09:33.592 { 00:09:33.592 "name": "BaseBdev3", 00:09:33.592 "uuid": "62e32a92-b653-54c3-9ce7-25c7f0d0adc4", 00:09:33.592 "is_configured": true, 00:09:33.592 "data_offset": 2048, 00:09:33.592 "data_size": 63488 00:09:33.592 } 00:09:33.592 ] 00:09:33.592 }' 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.592 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.162 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:34.162 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.162 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.162 [2024-11-15 10:54:40.872181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.162 [2024-11-15 10:54:40.872216] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.162 [2024-11-15 10:54:40.875081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.162 [2024-11-15 10:54:40.875132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.162 [2024-11-15 10:54:40.875175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.162 [2024-11-15 10:54:40.875189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:34.162 { 00:09:34.162 "results": [ 00:09:34.162 { 00:09:34.162 "job": "raid_bdev1", 00:09:34.162 "core_mask": "0x1", 00:09:34.162 "workload": "randrw", 00:09:34.162 "percentage": 50, 
00:09:34.162 "status": "finished", 00:09:34.162 "queue_depth": 1, 00:09:34.162 "io_size": 131072, 00:09:34.162 "runtime": 1.352988, 00:09:34.162 "iops": 14899.614778549403, 00:09:34.162 "mibps": 1862.4518473186754, 00:09:34.162 "io_failed": 1, 00:09:34.162 "io_timeout": 0, 00:09:34.162 "avg_latency_us": 93.22380536494073, 00:09:34.162 "min_latency_us": 26.047161572052403, 00:09:34.162 "max_latency_us": 1595.4724890829693 00:09:34.162 } 00:09:34.162 ], 00:09:34.162 "core_count": 1 00:09:34.162 } 00:09:34.162 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.162 10:54:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67254 00:09:34.162 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 67254 ']' 00:09:34.162 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 67254 00:09:34.162 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:34.162 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:34.162 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67254 00:09:34.162 killing process with pid 67254 00:09:34.162 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:34.162 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:34.162 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67254' 00:09:34.162 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 67254 00:09:34.162 [2024-11-15 10:54:40.920657] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:34.162 10:54:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 67254 00:09:34.423 [2024-11-15 
10:54:41.169582] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:35.804 10:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.j1n7xWrCu9 00:09:35.804 10:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:35.804 10:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:35.804 10:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:35.804 10:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:35.804 ************************************ 00:09:35.804 END TEST raid_read_error_test 00:09:35.804 ************************************ 00:09:35.804 10:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.804 10:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:35.804 10:54:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:35.804 00:09:35.804 real 0m4.698s 00:09:35.804 user 0m5.629s 00:09:35.804 sys 0m0.596s 00:09:35.804 10:54:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:35.804 10:54:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.804 10:54:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:35.804 10:54:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:35.804 10:54:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:35.804 10:54:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:35.804 ************************************ 00:09:35.804 START TEST raid_write_error_test 00:09:35.804 ************************************ 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:09:35.804 10:54:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:35.804 10:54:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KVjljYagtf 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67405 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67405 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67405 ']' 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:35.804 10:54:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.804 [2024-11-15 10:54:42.584913] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:09:35.804 [2024-11-15 10:54:42.585125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67405 ] 00:09:36.064 [2024-11-15 10:54:42.762557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.064 [2024-11-15 10:54:42.887748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.323 [2024-11-15 10:54:43.100282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.323 [2024-11-15 10:54:43.100433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.583 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:36.583 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:36.583 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:36.583 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:36.583 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.583 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.583 BaseBdev1_malloc 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.842 true 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.842 [2024-11-15 10:54:43.526847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:36.842 [2024-11-15 10:54:43.526905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.842 [2024-11-15 10:54:43.526942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:36.842 [2024-11-15 10:54:43.526953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.842 [2024-11-15 10:54:43.529252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.842 [2024-11-15 10:54:43.529295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:36.842 BaseBdev1 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:36.842 BaseBdev2_malloc 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.842 true 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.842 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.842 [2024-11-15 10:54:43.593087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:36.842 [2024-11-15 10:54:43.593170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.842 [2024-11-15 10:54:43.593191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:36.843 [2024-11-15 10:54:43.593202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.843 [2024-11-15 10:54:43.595422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.843 [2024-11-15 10:54:43.595468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:36.843 BaseBdev2 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:36.843 10:54:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.843 BaseBdev3_malloc 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.843 true 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.843 [2024-11-15 10:54:43.673051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:36.843 [2024-11-15 10:54:43.673110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.843 [2024-11-15 10:54:43.673131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:36.843 [2024-11-15 10:54:43.673141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.843 [2024-11-15 10:54:43.675273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.843 [2024-11-15 10:54:43.675399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:36.843 BaseBdev3 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.843 [2024-11-15 10:54:43.685098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:36.843 [2024-11-15 10:54:43.687021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.843 [2024-11-15 10:54:43.687158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.843 [2024-11-15 10:54:43.687381] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:36.843 [2024-11-15 10:54:43.687395] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:36.843 [2024-11-15 10:54:43.687651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:36.843 [2024-11-15 10:54:43.687819] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:36.843 [2024-11-15 10:54:43.687833] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:36.843 [2024-11-15 10:54:43.688007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.843 "name": "raid_bdev1", 00:09:36.843 "uuid": "3559f5f7-8a01-4d3e-a044-fa5566475a6e", 00:09:36.843 "strip_size_kb": 64, 00:09:36.843 "state": "online", 00:09:36.843 "raid_level": "concat", 00:09:36.843 "superblock": true, 00:09:36.843 "num_base_bdevs": 3, 00:09:36.843 "num_base_bdevs_discovered": 3, 00:09:36.843 "num_base_bdevs_operational": 3, 00:09:36.843 "base_bdevs_list": [ 00:09:36.843 { 00:09:36.843 
"name": "BaseBdev1", 00:09:36.843 "uuid": "4e98c1c6-816d-5538-97c6-a4dfb6c0080c", 00:09:36.843 "is_configured": true, 00:09:36.843 "data_offset": 2048, 00:09:36.843 "data_size": 63488 00:09:36.843 }, 00:09:36.843 { 00:09:36.843 "name": "BaseBdev2", 00:09:36.843 "uuid": "2385e10a-8666-5ac1-a8c8-a3de5aa8af9f", 00:09:36.843 "is_configured": true, 00:09:36.843 "data_offset": 2048, 00:09:36.843 "data_size": 63488 00:09:36.843 }, 00:09:36.843 { 00:09:36.843 "name": "BaseBdev3", 00:09:36.843 "uuid": "bdf00c32-a2de-5e27-8865-efc99e28440e", 00:09:36.843 "is_configured": true, 00:09:36.843 "data_offset": 2048, 00:09:36.843 "data_size": 63488 00:09:36.843 } 00:09:36.843 ] 00:09:36.843 }' 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.843 10:54:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.413 10:54:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:37.413 10:54:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:37.413 [2024-11-15 10:54:44.253447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.349 "name": "raid_bdev1", 00:09:38.349 "uuid": "3559f5f7-8a01-4d3e-a044-fa5566475a6e", 00:09:38.349 "strip_size_kb": 64, 00:09:38.349 "state": "online", 
00:09:38.349 "raid_level": "concat", 00:09:38.349 "superblock": true, 00:09:38.349 "num_base_bdevs": 3, 00:09:38.349 "num_base_bdevs_discovered": 3, 00:09:38.349 "num_base_bdevs_operational": 3, 00:09:38.349 "base_bdevs_list": [ 00:09:38.349 { 00:09:38.349 "name": "BaseBdev1", 00:09:38.349 "uuid": "4e98c1c6-816d-5538-97c6-a4dfb6c0080c", 00:09:38.349 "is_configured": true, 00:09:38.349 "data_offset": 2048, 00:09:38.349 "data_size": 63488 00:09:38.349 }, 00:09:38.349 { 00:09:38.349 "name": "BaseBdev2", 00:09:38.349 "uuid": "2385e10a-8666-5ac1-a8c8-a3de5aa8af9f", 00:09:38.349 "is_configured": true, 00:09:38.349 "data_offset": 2048, 00:09:38.349 "data_size": 63488 00:09:38.349 }, 00:09:38.349 { 00:09:38.349 "name": "BaseBdev3", 00:09:38.349 "uuid": "bdf00c32-a2de-5e27-8865-efc99e28440e", 00:09:38.349 "is_configured": true, 00:09:38.349 "data_offset": 2048, 00:09:38.349 "data_size": 63488 00:09:38.349 } 00:09:38.349 ] 00:09:38.349 }' 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.349 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.918 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:38.918 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.918 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.918 [2024-11-15 10:54:45.613641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:38.918 [2024-11-15 10:54:45.613672] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.918 [2024-11-15 10:54:45.616663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.918 [2024-11-15 10:54:45.616712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.918 [2024-11-15 10:54:45.616753] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.918 [2024-11-15 10:54:45.616778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:38.918 { 00:09:38.918 "results": [ 00:09:38.918 { 00:09:38.918 "job": "raid_bdev1", 00:09:38.918 "core_mask": "0x1", 00:09:38.918 "workload": "randrw", 00:09:38.918 "percentage": 50, 00:09:38.918 "status": "finished", 00:09:38.918 "queue_depth": 1, 00:09:38.918 "io_size": 131072, 00:09:38.918 "runtime": 1.36095, 00:09:38.918 "iops": 14881.516587677725, 00:09:38.918 "mibps": 1860.1895734597156, 00:09:38.918 "io_failed": 1, 00:09:38.918 "io_timeout": 0, 00:09:38.918 "avg_latency_us": 93.23667880795986, 00:09:38.918 "min_latency_us": 26.270742358078603, 00:09:38.918 "max_latency_us": 1616.9362445414847 00:09:38.918 } 00:09:38.918 ], 00:09:38.918 "core_count": 1 00:09:38.918 } 00:09:38.918 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.918 10:54:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67405 00:09:38.918 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67405 ']' 00:09:38.918 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 67405 00:09:38.918 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:38.918 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:38.918 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67405 00:09:38.918 killing process with pid 67405 00:09:38.918 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:38.918 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:38.918 10:54:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67405' 00:09:38.918 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67405 00:09:38.918 [2024-11-15 10:54:45.659328] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.918 10:54:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67405 00:09:39.178 [2024-11-15 10:54:45.901542] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:40.558 10:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KVjljYagtf 00:09:40.558 10:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:40.558 10:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:40.558 10:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:40.558 10:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:40.558 10:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:40.558 10:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:40.558 ************************************ 00:09:40.558 END TEST raid_write_error_test 00:09:40.558 ************************************ 00:09:40.558 10:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:40.558 00:09:40.558 real 0m4.666s 00:09:40.558 user 0m5.561s 00:09:40.558 sys 0m0.585s 00:09:40.558 10:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:40.558 10:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.558 10:54:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:40.558 10:54:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:40.558 10:54:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:40.558 10:54:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:40.558 10:54:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:40.558 ************************************ 00:09:40.558 START TEST raid_state_function_test 00:09:40.558 ************************************ 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67543 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:40.558 Process raid pid: 67543 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67543' 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67543 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 67543 ']' 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:40.558 10:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.558 [2024-11-15 10:54:47.320336] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:09:40.558 [2024-11-15 10:54:47.320537] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.818 [2024-11-15 10:54:47.501753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.818 [2024-11-15 10:54:47.619775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.077 [2024-11-15 10:54:47.839323] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.077 [2024-11-15 10:54:47.839418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.337 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:41.337 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.338 [2024-11-15 10:54:48.167574] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.338 [2024-11-15 10:54:48.167724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.338 [2024-11-15 10:54:48.167759] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.338 [2024-11-15 10:54:48.167786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.338 [2024-11-15 10:54:48.167807] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:41.338 [2024-11-15 10:54:48.167831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.338 
10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.338 "name": "Existed_Raid", 00:09:41.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.338 "strip_size_kb": 0, 00:09:41.338 "state": "configuring", 00:09:41.338 "raid_level": "raid1", 00:09:41.338 "superblock": false, 00:09:41.338 "num_base_bdevs": 3, 00:09:41.338 "num_base_bdevs_discovered": 0, 00:09:41.338 "num_base_bdevs_operational": 3, 00:09:41.338 "base_bdevs_list": [ 00:09:41.338 { 00:09:41.338 "name": "BaseBdev1", 00:09:41.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.338 "is_configured": false, 00:09:41.338 "data_offset": 0, 00:09:41.338 "data_size": 0 00:09:41.338 }, 00:09:41.338 { 00:09:41.338 "name": "BaseBdev2", 00:09:41.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.338 "is_configured": false, 00:09:41.338 "data_offset": 0, 00:09:41.338 "data_size": 0 00:09:41.338 }, 00:09:41.338 { 00:09:41.338 "name": "BaseBdev3", 00:09:41.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.338 "is_configured": false, 00:09:41.338 "data_offset": 0, 00:09:41.338 "data_size": 0 00:09:41.338 } 00:09:41.338 ] 00:09:41.338 }' 00:09:41.338 10:54:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.338 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.948 [2024-11-15 10:54:48.634704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.948 [2024-11-15 10:54:48.634825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.948 [2024-11-15 10:54:48.646692] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.948 [2024-11-15 10:54:48.646756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.948 [2024-11-15 10:54:48.646765] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.948 [2024-11-15 10:54:48.646773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.948 [2024-11-15 10:54:48.646779] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:41.948 [2024-11-15 10:54:48.646788] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.948 [2024-11-15 10:54:48.694543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.948 BaseBdev1 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.948 [ 00:09:41.948 { 00:09:41.948 "name": "BaseBdev1", 00:09:41.948 "aliases": [ 00:09:41.948 "5cf9e0ab-aaba-4ba4-8cfa-47b9bf5f460c" 00:09:41.948 ], 00:09:41.948 "product_name": "Malloc disk", 00:09:41.948 "block_size": 512, 00:09:41.948 "num_blocks": 65536, 00:09:41.948 "uuid": "5cf9e0ab-aaba-4ba4-8cfa-47b9bf5f460c", 00:09:41.948 "assigned_rate_limits": { 00:09:41.948 "rw_ios_per_sec": 0, 00:09:41.948 "rw_mbytes_per_sec": 0, 00:09:41.948 "r_mbytes_per_sec": 0, 00:09:41.948 "w_mbytes_per_sec": 0 00:09:41.948 }, 00:09:41.948 "claimed": true, 00:09:41.948 "claim_type": "exclusive_write", 00:09:41.948 "zoned": false, 00:09:41.948 "supported_io_types": { 00:09:41.948 "read": true, 00:09:41.948 "write": true, 00:09:41.948 "unmap": true, 00:09:41.948 "flush": true, 00:09:41.948 "reset": true, 00:09:41.948 "nvme_admin": false, 00:09:41.948 "nvme_io": false, 00:09:41.948 "nvme_io_md": false, 00:09:41.948 "write_zeroes": true, 00:09:41.948 "zcopy": true, 00:09:41.948 "get_zone_info": false, 00:09:41.948 "zone_management": false, 00:09:41.948 "zone_append": false, 00:09:41.948 "compare": false, 00:09:41.948 "compare_and_write": false, 00:09:41.948 "abort": true, 00:09:41.948 "seek_hole": false, 00:09:41.948 "seek_data": false, 00:09:41.948 "copy": true, 00:09:41.948 "nvme_iov_md": false 00:09:41.948 }, 00:09:41.948 "memory_domains": [ 00:09:41.948 { 00:09:41.948 "dma_device_id": "system", 00:09:41.948 "dma_device_type": 1 00:09:41.948 }, 00:09:41.948 { 00:09:41.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.948 "dma_device_type": 2 00:09:41.948 } 00:09:41.948 ], 00:09:41.948 "driver_specific": {} 00:09:41.948 } 00:09:41.948 ] 00:09:41.948 10:54:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:41.948 "name": "Existed_Raid", 00:09:41.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.948 "strip_size_kb": 0, 00:09:41.948 "state": "configuring", 00:09:41.948 "raid_level": "raid1", 00:09:41.948 "superblock": false, 00:09:41.948 "num_base_bdevs": 3, 00:09:41.948 "num_base_bdevs_discovered": 1, 00:09:41.948 "num_base_bdevs_operational": 3, 00:09:41.948 "base_bdevs_list": [ 00:09:41.948 { 00:09:41.948 "name": "BaseBdev1", 00:09:41.948 "uuid": "5cf9e0ab-aaba-4ba4-8cfa-47b9bf5f460c", 00:09:41.948 "is_configured": true, 00:09:41.948 "data_offset": 0, 00:09:41.948 "data_size": 65536 00:09:41.948 }, 00:09:41.948 { 00:09:41.948 "name": "BaseBdev2", 00:09:41.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.948 "is_configured": false, 00:09:41.948 "data_offset": 0, 00:09:41.948 "data_size": 0 00:09:41.948 }, 00:09:41.948 { 00:09:41.948 "name": "BaseBdev3", 00:09:41.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.948 "is_configured": false, 00:09:41.948 "data_offset": 0, 00:09:41.948 "data_size": 0 00:09:41.948 } 00:09:41.948 ] 00:09:41.948 }' 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.948 10:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.517 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:42.517 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.517 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.517 [2024-11-15 10:54:49.161802] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:42.517 [2024-11-15 10:54:49.161925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:42.517 10:54:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.517 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:42.517 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.517 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.517 [2024-11-15 10:54:49.173840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.517 [2024-11-15 10:54:49.175955] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:42.517 [2024-11-15 10:54:49.176011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.517 [2024-11-15 10:54:49.176024] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:42.517 [2024-11-15 10:54:49.176034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:42.517 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.517 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:42.517 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:42.517 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:42.517 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.517 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.517 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.517 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:42.517 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.518 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.518 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.518 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.518 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.518 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.518 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.518 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.518 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.518 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.518 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.518 "name": "Existed_Raid", 00:09:42.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.518 "strip_size_kb": 0, 00:09:42.518 "state": "configuring", 00:09:42.518 "raid_level": "raid1", 00:09:42.518 "superblock": false, 00:09:42.518 "num_base_bdevs": 3, 00:09:42.518 "num_base_bdevs_discovered": 1, 00:09:42.518 "num_base_bdevs_operational": 3, 00:09:42.518 "base_bdevs_list": [ 00:09:42.518 { 00:09:42.518 "name": "BaseBdev1", 00:09:42.518 "uuid": "5cf9e0ab-aaba-4ba4-8cfa-47b9bf5f460c", 00:09:42.518 "is_configured": true, 00:09:42.518 "data_offset": 0, 00:09:42.518 "data_size": 65536 00:09:42.518 }, 00:09:42.518 { 00:09:42.518 "name": "BaseBdev2", 00:09:42.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.518 
"is_configured": false, 00:09:42.518 "data_offset": 0, 00:09:42.518 "data_size": 0 00:09:42.518 }, 00:09:42.518 { 00:09:42.518 "name": "BaseBdev3", 00:09:42.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.518 "is_configured": false, 00:09:42.518 "data_offset": 0, 00:09:42.518 "data_size": 0 00:09:42.518 } 00:09:42.518 ] 00:09:42.518 }' 00:09:42.518 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.518 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.778 [2024-11-15 10:54:49.647832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.778 BaseBdev2 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:42.778 10:54:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.778 [ 00:09:42.778 { 00:09:42.778 "name": "BaseBdev2", 00:09:42.778 "aliases": [ 00:09:42.778 "1e31a96b-ae02-488f-b5e0-c021a1aecff0" 00:09:42.778 ], 00:09:42.778 "product_name": "Malloc disk", 00:09:42.778 "block_size": 512, 00:09:42.778 "num_blocks": 65536, 00:09:42.778 "uuid": "1e31a96b-ae02-488f-b5e0-c021a1aecff0", 00:09:42.778 "assigned_rate_limits": { 00:09:42.778 "rw_ios_per_sec": 0, 00:09:42.778 "rw_mbytes_per_sec": 0, 00:09:42.778 "r_mbytes_per_sec": 0, 00:09:42.778 "w_mbytes_per_sec": 0 00:09:42.778 }, 00:09:42.778 "claimed": true, 00:09:42.778 "claim_type": "exclusive_write", 00:09:42.778 "zoned": false, 00:09:42.778 "supported_io_types": { 00:09:42.778 "read": true, 00:09:42.778 "write": true, 00:09:42.778 "unmap": true, 00:09:42.778 "flush": true, 00:09:42.778 "reset": true, 00:09:42.778 "nvme_admin": false, 00:09:42.778 "nvme_io": false, 00:09:42.778 "nvme_io_md": false, 00:09:42.778 "write_zeroes": true, 00:09:42.778 "zcopy": true, 00:09:42.778 "get_zone_info": false, 00:09:42.778 "zone_management": false, 00:09:42.778 "zone_append": false, 00:09:42.778 "compare": false, 00:09:42.778 "compare_and_write": false, 00:09:42.778 "abort": true, 00:09:42.778 "seek_hole": false, 00:09:42.778 "seek_data": false, 00:09:42.778 "copy": true, 00:09:42.778 "nvme_iov_md": false 00:09:42.778 }, 00:09:42.778 
"memory_domains": [ 00:09:42.778 { 00:09:42.778 "dma_device_id": "system", 00:09:42.778 "dma_device_type": 1 00:09:42.778 }, 00:09:42.778 { 00:09:42.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.778 "dma_device_type": 2 00:09:42.778 } 00:09:42.778 ], 00:09:42.778 "driver_specific": {} 00:09:42.778 } 00:09:42.778 ] 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.778 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.038 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.038 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.038 "name": "Existed_Raid", 00:09:43.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.038 "strip_size_kb": 0, 00:09:43.038 "state": "configuring", 00:09:43.038 "raid_level": "raid1", 00:09:43.038 "superblock": false, 00:09:43.038 "num_base_bdevs": 3, 00:09:43.038 "num_base_bdevs_discovered": 2, 00:09:43.038 "num_base_bdevs_operational": 3, 00:09:43.038 "base_bdevs_list": [ 00:09:43.038 { 00:09:43.038 "name": "BaseBdev1", 00:09:43.038 "uuid": "5cf9e0ab-aaba-4ba4-8cfa-47b9bf5f460c", 00:09:43.038 "is_configured": true, 00:09:43.038 "data_offset": 0, 00:09:43.038 "data_size": 65536 00:09:43.038 }, 00:09:43.038 { 00:09:43.038 "name": "BaseBdev2", 00:09:43.038 "uuid": "1e31a96b-ae02-488f-b5e0-c021a1aecff0", 00:09:43.038 "is_configured": true, 00:09:43.038 "data_offset": 0, 00:09:43.038 "data_size": 65536 00:09:43.038 }, 00:09:43.038 { 00:09:43.038 "name": "BaseBdev3", 00:09:43.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.039 "is_configured": false, 00:09:43.039 "data_offset": 0, 00:09:43.039 "data_size": 0 00:09:43.039 } 00:09:43.039 ] 00:09:43.039 }' 00:09:43.039 10:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.039 10:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.298 [2024-11-15 10:54:50.173937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.298 [2024-11-15 10:54:50.173992] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:43.298 [2024-11-15 10:54:50.174004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:43.298 [2024-11-15 10:54:50.174264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:43.298 [2024-11-15 10:54:50.174469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:43.298 [2024-11-15 10:54:50.174479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:43.298 [2024-11-15 10:54:50.174725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.298 BaseBdev3 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.298 [ 00:09:43.298 { 00:09:43.298 "name": "BaseBdev3", 00:09:43.298 "aliases": [ 00:09:43.298 "3ef54476-3946-4402-8164-2de5cf10c441" 00:09:43.298 ], 00:09:43.298 "product_name": "Malloc disk", 00:09:43.298 "block_size": 512, 00:09:43.298 "num_blocks": 65536, 00:09:43.298 "uuid": "3ef54476-3946-4402-8164-2de5cf10c441", 00:09:43.298 "assigned_rate_limits": { 00:09:43.298 "rw_ios_per_sec": 0, 00:09:43.298 "rw_mbytes_per_sec": 0, 00:09:43.298 "r_mbytes_per_sec": 0, 00:09:43.298 "w_mbytes_per_sec": 0 00:09:43.298 }, 00:09:43.298 "claimed": true, 00:09:43.298 "claim_type": "exclusive_write", 00:09:43.298 "zoned": false, 00:09:43.298 "supported_io_types": { 00:09:43.298 "read": true, 00:09:43.298 "write": true, 00:09:43.298 "unmap": true, 00:09:43.298 "flush": true, 00:09:43.298 "reset": true, 00:09:43.298 "nvme_admin": false, 00:09:43.298 "nvme_io": false, 00:09:43.298 "nvme_io_md": false, 00:09:43.298 "write_zeroes": true, 00:09:43.298 "zcopy": true, 00:09:43.298 "get_zone_info": false, 00:09:43.298 "zone_management": false, 00:09:43.298 "zone_append": false, 00:09:43.298 "compare": false, 00:09:43.298 "compare_and_write": false, 00:09:43.298 "abort": true, 00:09:43.298 "seek_hole": false, 00:09:43.298 "seek_data": false, 00:09:43.298 
"copy": true, 00:09:43.298 "nvme_iov_md": false 00:09:43.298 }, 00:09:43.298 "memory_domains": [ 00:09:43.298 { 00:09:43.298 "dma_device_id": "system", 00:09:43.298 "dma_device_type": 1 00:09:43.298 }, 00:09:43.298 { 00:09:43.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.298 "dma_device_type": 2 00:09:43.298 } 00:09:43.298 ], 00:09:43.298 "driver_specific": {} 00:09:43.298 } 00:09:43.298 ] 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.298 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.558 10:54:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.558 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.558 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.558 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.558 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.558 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.558 "name": "Existed_Raid", 00:09:43.558 "uuid": "2d3bd118-a264-4bed-8690-047b2ec96f5d", 00:09:43.558 "strip_size_kb": 0, 00:09:43.558 "state": "online", 00:09:43.558 "raid_level": "raid1", 00:09:43.558 "superblock": false, 00:09:43.558 "num_base_bdevs": 3, 00:09:43.558 "num_base_bdevs_discovered": 3, 00:09:43.558 "num_base_bdevs_operational": 3, 00:09:43.558 "base_bdevs_list": [ 00:09:43.558 { 00:09:43.558 "name": "BaseBdev1", 00:09:43.558 "uuid": "5cf9e0ab-aaba-4ba4-8cfa-47b9bf5f460c", 00:09:43.558 "is_configured": true, 00:09:43.558 "data_offset": 0, 00:09:43.558 "data_size": 65536 00:09:43.558 }, 00:09:43.558 { 00:09:43.558 "name": "BaseBdev2", 00:09:43.558 "uuid": "1e31a96b-ae02-488f-b5e0-c021a1aecff0", 00:09:43.558 "is_configured": true, 00:09:43.558 "data_offset": 0, 00:09:43.558 "data_size": 65536 00:09:43.558 }, 00:09:43.558 { 00:09:43.558 "name": "BaseBdev3", 00:09:43.558 "uuid": "3ef54476-3946-4402-8164-2de5cf10c441", 00:09:43.558 "is_configured": true, 00:09:43.558 "data_offset": 0, 00:09:43.558 "data_size": 65536 00:09:43.558 } 00:09:43.558 ] 00:09:43.558 }' 00:09:43.558 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.558 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.817 10:54:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:43.817 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:43.817 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:43.817 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:43.817 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:43.817 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:43.817 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:43.817 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:43.817 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.817 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.817 [2024-11-15 10:54:50.649543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.817 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.817 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:43.817 "name": "Existed_Raid", 00:09:43.817 "aliases": [ 00:09:43.817 "2d3bd118-a264-4bed-8690-047b2ec96f5d" 00:09:43.817 ], 00:09:43.817 "product_name": "Raid Volume", 00:09:43.817 "block_size": 512, 00:09:43.817 "num_blocks": 65536, 00:09:43.817 "uuid": "2d3bd118-a264-4bed-8690-047b2ec96f5d", 00:09:43.817 "assigned_rate_limits": { 00:09:43.817 "rw_ios_per_sec": 0, 00:09:43.817 "rw_mbytes_per_sec": 0, 00:09:43.817 "r_mbytes_per_sec": 0, 00:09:43.817 "w_mbytes_per_sec": 0 00:09:43.817 }, 00:09:43.817 "claimed": false, 00:09:43.817 "zoned": false, 
00:09:43.817 "supported_io_types": { 00:09:43.817 "read": true, 00:09:43.817 "write": true, 00:09:43.817 "unmap": false, 00:09:43.817 "flush": false, 00:09:43.817 "reset": true, 00:09:43.817 "nvme_admin": false, 00:09:43.817 "nvme_io": false, 00:09:43.817 "nvme_io_md": false, 00:09:43.817 "write_zeroes": true, 00:09:43.817 "zcopy": false, 00:09:43.817 "get_zone_info": false, 00:09:43.817 "zone_management": false, 00:09:43.817 "zone_append": false, 00:09:43.817 "compare": false, 00:09:43.817 "compare_and_write": false, 00:09:43.817 "abort": false, 00:09:43.817 "seek_hole": false, 00:09:43.817 "seek_data": false, 00:09:43.817 "copy": false, 00:09:43.817 "nvme_iov_md": false 00:09:43.817 }, 00:09:43.817 "memory_domains": [ 00:09:43.817 { 00:09:43.817 "dma_device_id": "system", 00:09:43.817 "dma_device_type": 1 00:09:43.817 }, 00:09:43.817 { 00:09:43.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.818 "dma_device_type": 2 00:09:43.818 }, 00:09:43.818 { 00:09:43.818 "dma_device_id": "system", 00:09:43.818 "dma_device_type": 1 00:09:43.818 }, 00:09:43.818 { 00:09:43.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.818 "dma_device_type": 2 00:09:43.818 }, 00:09:43.818 { 00:09:43.818 "dma_device_id": "system", 00:09:43.818 "dma_device_type": 1 00:09:43.818 }, 00:09:43.818 { 00:09:43.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.818 "dma_device_type": 2 00:09:43.818 } 00:09:43.818 ], 00:09:43.818 "driver_specific": { 00:09:43.818 "raid": { 00:09:43.818 "uuid": "2d3bd118-a264-4bed-8690-047b2ec96f5d", 00:09:43.818 "strip_size_kb": 0, 00:09:43.818 "state": "online", 00:09:43.818 "raid_level": "raid1", 00:09:43.818 "superblock": false, 00:09:43.818 "num_base_bdevs": 3, 00:09:43.818 "num_base_bdevs_discovered": 3, 00:09:43.818 "num_base_bdevs_operational": 3, 00:09:43.818 "base_bdevs_list": [ 00:09:43.818 { 00:09:43.818 "name": "BaseBdev1", 00:09:43.818 "uuid": "5cf9e0ab-aaba-4ba4-8cfa-47b9bf5f460c", 00:09:43.818 "is_configured": true, 00:09:43.818 
"data_offset": 0, 00:09:43.818 "data_size": 65536 00:09:43.818 }, 00:09:43.818 { 00:09:43.818 "name": "BaseBdev2", 00:09:43.818 "uuid": "1e31a96b-ae02-488f-b5e0-c021a1aecff0", 00:09:43.818 "is_configured": true, 00:09:43.818 "data_offset": 0, 00:09:43.818 "data_size": 65536 00:09:43.818 }, 00:09:43.818 { 00:09:43.818 "name": "BaseBdev3", 00:09:43.818 "uuid": "3ef54476-3946-4402-8164-2de5cf10c441", 00:09:43.818 "is_configured": true, 00:09:43.818 "data_offset": 0, 00:09:43.818 "data_size": 65536 00:09:43.818 } 00:09:43.818 ] 00:09:43.818 } 00:09:43.818 } 00:09:43.818 }' 00:09:43.818 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:43.818 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:43.818 BaseBdev2 00:09:43.818 BaseBdev3' 00:09:43.818 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.076 10:54:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.076 [2024-11-15 10:54:50.924819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.335 "name": "Existed_Raid", 00:09:44.335 "uuid": "2d3bd118-a264-4bed-8690-047b2ec96f5d", 00:09:44.335 "strip_size_kb": 0, 00:09:44.335 "state": "online", 00:09:44.335 "raid_level": "raid1", 00:09:44.335 "superblock": false, 00:09:44.335 "num_base_bdevs": 3, 00:09:44.335 "num_base_bdevs_discovered": 2, 00:09:44.335 "num_base_bdevs_operational": 2, 00:09:44.335 "base_bdevs_list": [ 00:09:44.335 { 00:09:44.335 "name": null, 00:09:44.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.335 "is_configured": false, 00:09:44.335 "data_offset": 0, 00:09:44.335 "data_size": 65536 00:09:44.335 }, 00:09:44.335 { 00:09:44.335 "name": "BaseBdev2", 00:09:44.335 "uuid": "1e31a96b-ae02-488f-b5e0-c021a1aecff0", 00:09:44.335 "is_configured": true, 00:09:44.335 "data_offset": 0, 00:09:44.335 "data_size": 65536 00:09:44.335 }, 00:09:44.335 { 00:09:44.335 "name": "BaseBdev3", 00:09:44.335 "uuid": "3ef54476-3946-4402-8164-2de5cf10c441", 00:09:44.335 "is_configured": true, 00:09:44.335 "data_offset": 0, 00:09:44.335 "data_size": 65536 00:09:44.335 } 00:09:44.335 ] 
00:09:44.335 }' 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.335 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.593 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:44.593 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.593 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.593 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:44.593 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.593 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.593 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.593 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:44.593 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:44.593 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:44.593 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.593 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.593 [2024-11-15 10:54:51.499003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.851 10:54:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.851 [2024-11-15 10:54:51.668093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:44.851 [2024-11-15 10:54:51.668266] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.851 [2024-11-15 10:54:51.765083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.851 [2024-11-15 10:54:51.765212] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.851 [2024-11-15 10:54:51.765254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:44.851 10:54:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.851 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.110 BaseBdev2 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:45.110 
10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.110 [ 00:09:45.110 { 00:09:45.110 "name": "BaseBdev2", 00:09:45.110 "aliases": [ 00:09:45.110 "b5c353a9-89d8-4b35-991a-121713628432" 00:09:45.110 ], 00:09:45.110 "product_name": "Malloc disk", 00:09:45.110 "block_size": 512, 00:09:45.110 "num_blocks": 65536, 00:09:45.110 "uuid": "b5c353a9-89d8-4b35-991a-121713628432", 00:09:45.110 "assigned_rate_limits": { 00:09:45.110 "rw_ios_per_sec": 0, 00:09:45.110 "rw_mbytes_per_sec": 0, 00:09:45.110 "r_mbytes_per_sec": 0, 00:09:45.110 "w_mbytes_per_sec": 0 00:09:45.110 }, 00:09:45.110 "claimed": false, 00:09:45.110 "zoned": false, 00:09:45.110 "supported_io_types": { 00:09:45.110 "read": true, 00:09:45.110 "write": true, 00:09:45.110 "unmap": true, 00:09:45.110 "flush": true, 00:09:45.110 "reset": true, 00:09:45.110 "nvme_admin": false, 00:09:45.110 "nvme_io": false, 00:09:45.110 "nvme_io_md": false, 00:09:45.110 "write_zeroes": true, 
00:09:45.110 "zcopy": true, 00:09:45.110 "get_zone_info": false, 00:09:45.110 "zone_management": false, 00:09:45.110 "zone_append": false, 00:09:45.110 "compare": false, 00:09:45.110 "compare_and_write": false, 00:09:45.110 "abort": true, 00:09:45.110 "seek_hole": false, 00:09:45.110 "seek_data": false, 00:09:45.110 "copy": true, 00:09:45.110 "nvme_iov_md": false 00:09:45.110 }, 00:09:45.110 "memory_domains": [ 00:09:45.110 { 00:09:45.110 "dma_device_id": "system", 00:09:45.110 "dma_device_type": 1 00:09:45.110 }, 00:09:45.110 { 00:09:45.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.110 "dma_device_type": 2 00:09:45.110 } 00:09:45.110 ], 00:09:45.110 "driver_specific": {} 00:09:45.110 } 00:09:45.110 ] 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.110 BaseBdev3 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:45.110 10:54:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.110 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.110 [ 00:09:45.110 { 00:09:45.110 "name": "BaseBdev3", 00:09:45.110 "aliases": [ 00:09:45.110 "6975267c-d62e-42ac-801c-667250ebd35c" 00:09:45.110 ], 00:09:45.110 "product_name": "Malloc disk", 00:09:45.110 "block_size": 512, 00:09:45.110 "num_blocks": 65536, 00:09:45.110 "uuid": "6975267c-d62e-42ac-801c-667250ebd35c", 00:09:45.110 "assigned_rate_limits": { 00:09:45.110 "rw_ios_per_sec": 0, 00:09:45.110 "rw_mbytes_per_sec": 0, 00:09:45.110 "r_mbytes_per_sec": 0, 00:09:45.110 "w_mbytes_per_sec": 0 00:09:45.110 }, 00:09:45.111 "claimed": false, 00:09:45.111 "zoned": false, 00:09:45.111 "supported_io_types": { 00:09:45.111 "read": true, 00:09:45.111 "write": true, 00:09:45.111 "unmap": true, 00:09:45.111 "flush": true, 00:09:45.111 "reset": true, 00:09:45.111 "nvme_admin": false, 00:09:45.111 "nvme_io": false, 00:09:45.111 "nvme_io_md": false, 00:09:45.111 "write_zeroes": true, 
00:09:45.111 "zcopy": true, 00:09:45.111 "get_zone_info": false, 00:09:45.111 "zone_management": false, 00:09:45.111 "zone_append": false, 00:09:45.111 "compare": false, 00:09:45.111 "compare_and_write": false, 00:09:45.111 "abort": true, 00:09:45.111 "seek_hole": false, 00:09:45.111 "seek_data": false, 00:09:45.111 "copy": true, 00:09:45.111 "nvme_iov_md": false 00:09:45.111 }, 00:09:45.111 "memory_domains": [ 00:09:45.111 { 00:09:45.111 "dma_device_id": "system", 00:09:45.111 "dma_device_type": 1 00:09:45.111 }, 00:09:45.111 { 00:09:45.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.111 "dma_device_type": 2 00:09:45.111 } 00:09:45.111 ], 00:09:45.111 "driver_specific": {} 00:09:45.111 } 00:09:45.111 ] 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.111 [2024-11-15 10:54:51.990880] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:45.111 [2024-11-15 10:54:51.990982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.111 [2024-11-15 10:54:51.991027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.111 [2024-11-15 10:54:51.993113] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.111 10:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.111 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.111 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.111 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.111 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.370 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:45.370 "name": "Existed_Raid", 00:09:45.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.370 "strip_size_kb": 0, 00:09:45.370 "state": "configuring", 00:09:45.370 "raid_level": "raid1", 00:09:45.370 "superblock": false, 00:09:45.370 "num_base_bdevs": 3, 00:09:45.370 "num_base_bdevs_discovered": 2, 00:09:45.370 "num_base_bdevs_operational": 3, 00:09:45.370 "base_bdevs_list": [ 00:09:45.370 { 00:09:45.370 "name": "BaseBdev1", 00:09:45.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.370 "is_configured": false, 00:09:45.370 "data_offset": 0, 00:09:45.370 "data_size": 0 00:09:45.370 }, 00:09:45.370 { 00:09:45.370 "name": "BaseBdev2", 00:09:45.370 "uuid": "b5c353a9-89d8-4b35-991a-121713628432", 00:09:45.370 "is_configured": true, 00:09:45.370 "data_offset": 0, 00:09:45.370 "data_size": 65536 00:09:45.370 }, 00:09:45.370 { 00:09:45.370 "name": "BaseBdev3", 00:09:45.370 "uuid": "6975267c-d62e-42ac-801c-667250ebd35c", 00:09:45.370 "is_configured": true, 00:09:45.370 "data_offset": 0, 00:09:45.370 "data_size": 65536 00:09:45.370 } 00:09:45.370 ] 00:09:45.370 }' 00:09:45.370 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.370 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.629 [2024-11-15 10:54:52.438148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.629 "name": "Existed_Raid", 00:09:45.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.629 "strip_size_kb": 0, 00:09:45.629 "state": "configuring", 00:09:45.629 "raid_level": "raid1", 00:09:45.629 "superblock": false, 00:09:45.629 "num_base_bdevs": 3, 
00:09:45.629 "num_base_bdevs_discovered": 1, 00:09:45.629 "num_base_bdevs_operational": 3, 00:09:45.629 "base_bdevs_list": [ 00:09:45.629 { 00:09:45.629 "name": "BaseBdev1", 00:09:45.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.629 "is_configured": false, 00:09:45.629 "data_offset": 0, 00:09:45.629 "data_size": 0 00:09:45.629 }, 00:09:45.629 { 00:09:45.629 "name": null, 00:09:45.629 "uuid": "b5c353a9-89d8-4b35-991a-121713628432", 00:09:45.629 "is_configured": false, 00:09:45.629 "data_offset": 0, 00:09:45.629 "data_size": 65536 00:09:45.629 }, 00:09:45.629 { 00:09:45.629 "name": "BaseBdev3", 00:09:45.629 "uuid": "6975267c-d62e-42ac-801c-667250ebd35c", 00:09:45.629 "is_configured": true, 00:09:45.629 "data_offset": 0, 00:09:45.629 "data_size": 65536 00:09:45.629 } 00:09:45.629 ] 00:09:45.629 }' 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.629 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.199 10:54:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.199 [2024-11-15 10:54:52.970452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.199 BaseBdev1 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.199 10:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.199 [ 00:09:46.199 { 00:09:46.199 "name": "BaseBdev1", 00:09:46.199 "aliases": [ 00:09:46.199 "854c9ab9-c064-4c0e-b574-2fbc2dd8ab33" 00:09:46.199 ], 00:09:46.199 "product_name": "Malloc disk", 
00:09:46.199 "block_size": 512, 00:09:46.199 "num_blocks": 65536, 00:09:46.199 "uuid": "854c9ab9-c064-4c0e-b574-2fbc2dd8ab33", 00:09:46.199 "assigned_rate_limits": { 00:09:46.199 "rw_ios_per_sec": 0, 00:09:46.199 "rw_mbytes_per_sec": 0, 00:09:46.199 "r_mbytes_per_sec": 0, 00:09:46.199 "w_mbytes_per_sec": 0 00:09:46.199 }, 00:09:46.199 "claimed": true, 00:09:46.199 "claim_type": "exclusive_write", 00:09:46.199 "zoned": false, 00:09:46.199 "supported_io_types": { 00:09:46.199 "read": true, 00:09:46.199 "write": true, 00:09:46.199 "unmap": true, 00:09:46.199 "flush": true, 00:09:46.199 "reset": true, 00:09:46.199 "nvme_admin": false, 00:09:46.199 "nvme_io": false, 00:09:46.199 "nvme_io_md": false, 00:09:46.199 "write_zeroes": true, 00:09:46.199 "zcopy": true, 00:09:46.199 "get_zone_info": false, 00:09:46.199 "zone_management": false, 00:09:46.199 "zone_append": false, 00:09:46.199 "compare": false, 00:09:46.200 "compare_and_write": false, 00:09:46.200 "abort": true, 00:09:46.200 "seek_hole": false, 00:09:46.200 "seek_data": false, 00:09:46.200 "copy": true, 00:09:46.200 "nvme_iov_md": false 00:09:46.200 }, 00:09:46.200 "memory_domains": [ 00:09:46.200 { 00:09:46.200 "dma_device_id": "system", 00:09:46.200 "dma_device_type": 1 00:09:46.200 }, 00:09:46.200 { 00:09:46.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.200 "dma_device_type": 2 00:09:46.200 } 00:09:46.200 ], 00:09:46.200 "driver_specific": {} 00:09:46.200 } 00:09:46.200 ] 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.200 "name": "Existed_Raid", 00:09:46.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.200 "strip_size_kb": 0, 00:09:46.200 "state": "configuring", 00:09:46.200 "raid_level": "raid1", 00:09:46.200 "superblock": false, 00:09:46.200 "num_base_bdevs": 3, 00:09:46.200 "num_base_bdevs_discovered": 2, 00:09:46.200 "num_base_bdevs_operational": 3, 00:09:46.200 "base_bdevs_list": [ 00:09:46.200 { 00:09:46.200 "name": "BaseBdev1", 00:09:46.200 "uuid": 
"854c9ab9-c064-4c0e-b574-2fbc2dd8ab33", 00:09:46.200 "is_configured": true, 00:09:46.200 "data_offset": 0, 00:09:46.200 "data_size": 65536 00:09:46.200 }, 00:09:46.200 { 00:09:46.200 "name": null, 00:09:46.200 "uuid": "b5c353a9-89d8-4b35-991a-121713628432", 00:09:46.200 "is_configured": false, 00:09:46.200 "data_offset": 0, 00:09:46.200 "data_size": 65536 00:09:46.200 }, 00:09:46.200 { 00:09:46.200 "name": "BaseBdev3", 00:09:46.200 "uuid": "6975267c-d62e-42ac-801c-667250ebd35c", 00:09:46.200 "is_configured": true, 00:09:46.200 "data_offset": 0, 00:09:46.200 "data_size": 65536 00:09:46.200 } 00:09:46.200 ] 00:09:46.200 }' 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.200 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.775 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:46.775 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.775 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.776 [2024-11-15 10:54:53.465661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:46.776 10:54:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.776 "name": "Existed_Raid", 00:09:46.776 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:46.776 "strip_size_kb": 0, 00:09:46.776 "state": "configuring", 00:09:46.776 "raid_level": "raid1", 00:09:46.776 "superblock": false, 00:09:46.776 "num_base_bdevs": 3, 00:09:46.776 "num_base_bdevs_discovered": 1, 00:09:46.776 "num_base_bdevs_operational": 3, 00:09:46.776 "base_bdevs_list": [ 00:09:46.776 { 00:09:46.776 "name": "BaseBdev1", 00:09:46.776 "uuid": "854c9ab9-c064-4c0e-b574-2fbc2dd8ab33", 00:09:46.776 "is_configured": true, 00:09:46.776 "data_offset": 0, 00:09:46.776 "data_size": 65536 00:09:46.776 }, 00:09:46.776 { 00:09:46.776 "name": null, 00:09:46.776 "uuid": "b5c353a9-89d8-4b35-991a-121713628432", 00:09:46.776 "is_configured": false, 00:09:46.776 "data_offset": 0, 00:09:46.776 "data_size": 65536 00:09:46.776 }, 00:09:46.776 { 00:09:46.776 "name": null, 00:09:46.776 "uuid": "6975267c-d62e-42ac-801c-667250ebd35c", 00:09:46.776 "is_configured": false, 00:09:46.776 "data_offset": 0, 00:09:46.776 "data_size": 65536 00:09:46.776 } 00:09:46.776 ] 00:09:46.776 }' 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.776 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.035 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.035 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.035 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:47.035 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.035 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.035 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:47.035 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:47.035 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.035 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.035 [2024-11-15 10:54:53.956888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.293 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.293 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:47.293 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.293 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.293 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.293 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.293 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.293 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.293 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.293 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.293 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.293 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.293 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.293 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:47.293 10:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.293 10:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.293 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.293 "name": "Existed_Raid", 00:09:47.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.293 "strip_size_kb": 0, 00:09:47.293 "state": "configuring", 00:09:47.293 "raid_level": "raid1", 00:09:47.293 "superblock": false, 00:09:47.293 "num_base_bdevs": 3, 00:09:47.293 "num_base_bdevs_discovered": 2, 00:09:47.293 "num_base_bdevs_operational": 3, 00:09:47.293 "base_bdevs_list": [ 00:09:47.293 { 00:09:47.293 "name": "BaseBdev1", 00:09:47.293 "uuid": "854c9ab9-c064-4c0e-b574-2fbc2dd8ab33", 00:09:47.293 "is_configured": true, 00:09:47.293 "data_offset": 0, 00:09:47.293 "data_size": 65536 00:09:47.293 }, 00:09:47.293 { 00:09:47.293 "name": null, 00:09:47.293 "uuid": "b5c353a9-89d8-4b35-991a-121713628432", 00:09:47.293 "is_configured": false, 00:09:47.293 "data_offset": 0, 00:09:47.293 "data_size": 65536 00:09:47.293 }, 00:09:47.293 { 00:09:47.293 "name": "BaseBdev3", 00:09:47.293 "uuid": "6975267c-d62e-42ac-801c-667250ebd35c", 00:09:47.293 "is_configured": true, 00:09:47.293 "data_offset": 0, 00:09:47.293 "data_size": 65536 00:09:47.293 } 00:09:47.293 ] 00:09:47.293 }' 00:09:47.293 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.293 10:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.552 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:47.552 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.552 10:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:47.552 10:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.552 10:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.810 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.811 [2024-11-15 10:54:54.484039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.811 "name": "Existed_Raid", 00:09:47.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.811 "strip_size_kb": 0, 00:09:47.811 "state": "configuring", 00:09:47.811 "raid_level": "raid1", 00:09:47.811 "superblock": false, 00:09:47.811 "num_base_bdevs": 3, 00:09:47.811 "num_base_bdevs_discovered": 1, 00:09:47.811 "num_base_bdevs_operational": 3, 00:09:47.811 "base_bdevs_list": [ 00:09:47.811 { 00:09:47.811 "name": null, 00:09:47.811 "uuid": "854c9ab9-c064-4c0e-b574-2fbc2dd8ab33", 00:09:47.811 "is_configured": false, 00:09:47.811 "data_offset": 0, 00:09:47.811 "data_size": 65536 00:09:47.811 }, 00:09:47.811 { 00:09:47.811 "name": null, 00:09:47.811 "uuid": "b5c353a9-89d8-4b35-991a-121713628432", 00:09:47.811 "is_configured": false, 00:09:47.811 "data_offset": 0, 00:09:47.811 "data_size": 65536 00:09:47.811 }, 00:09:47.811 { 00:09:47.811 "name": "BaseBdev3", 00:09:47.811 "uuid": "6975267c-d62e-42ac-801c-667250ebd35c", 00:09:47.811 "is_configured": true, 00:09:47.811 "data_offset": 0, 00:09:47.811 "data_size": 65536 00:09:47.811 } 00:09:47.811 ] 00:09:47.811 }' 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.811 10:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.378 [2024-11-15 10:54:55.064935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.378 "name": "Existed_Raid", 00:09:48.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.378 "strip_size_kb": 0, 00:09:48.378 "state": "configuring", 00:09:48.378 "raid_level": "raid1", 00:09:48.378 "superblock": false, 00:09:48.378 "num_base_bdevs": 3, 00:09:48.378 "num_base_bdevs_discovered": 2, 00:09:48.378 "num_base_bdevs_operational": 3, 00:09:48.378 "base_bdevs_list": [ 00:09:48.378 { 00:09:48.378 "name": null, 00:09:48.378 "uuid": "854c9ab9-c064-4c0e-b574-2fbc2dd8ab33", 00:09:48.378 "is_configured": false, 00:09:48.378 "data_offset": 0, 00:09:48.378 "data_size": 65536 00:09:48.378 }, 00:09:48.378 { 00:09:48.378 "name": "BaseBdev2", 00:09:48.378 "uuid": "b5c353a9-89d8-4b35-991a-121713628432", 00:09:48.378 "is_configured": true, 00:09:48.378 "data_offset": 0, 00:09:48.378 "data_size": 65536 00:09:48.378 }, 00:09:48.378 { 00:09:48.378 "name": "BaseBdev3", 
00:09:48.378 "uuid": "6975267c-d62e-42ac-801c-667250ebd35c", 00:09:48.378 "is_configured": true, 00:09:48.378 "data_offset": 0, 00:09:48.378 "data_size": 65536 00:09:48.378 } 00:09:48.378 ] 00:09:48.378 }' 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.378 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.637 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.637 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:48.637 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.637 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.637 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.896 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 854c9ab9-c064-4c0e-b574-2fbc2dd8ab33 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:48.897 [2024-11-15 10:54:55.655489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:48.897 [2024-11-15 10:54:55.655542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:48.897 [2024-11-15 10:54:55.655551] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:48.897 [2024-11-15 10:54:55.655819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:48.897 [2024-11-15 10:54:55.656004] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:48.897 [2024-11-15 10:54:55.656019] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:48.897 [2024-11-15 10:54:55.656318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.897 NewBaseBdev 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.897 
10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.897 [ 00:09:48.897 { 00:09:48.897 "name": "NewBaseBdev", 00:09:48.897 "aliases": [ 00:09:48.897 "854c9ab9-c064-4c0e-b574-2fbc2dd8ab33" 00:09:48.897 ], 00:09:48.897 "product_name": "Malloc disk", 00:09:48.897 "block_size": 512, 00:09:48.897 "num_blocks": 65536, 00:09:48.897 "uuid": "854c9ab9-c064-4c0e-b574-2fbc2dd8ab33", 00:09:48.897 "assigned_rate_limits": { 00:09:48.897 "rw_ios_per_sec": 0, 00:09:48.897 "rw_mbytes_per_sec": 0, 00:09:48.897 "r_mbytes_per_sec": 0, 00:09:48.897 "w_mbytes_per_sec": 0 00:09:48.897 }, 00:09:48.897 "claimed": true, 00:09:48.897 "claim_type": "exclusive_write", 00:09:48.897 "zoned": false, 00:09:48.897 "supported_io_types": { 00:09:48.897 "read": true, 00:09:48.897 "write": true, 00:09:48.897 "unmap": true, 00:09:48.897 "flush": true, 00:09:48.897 "reset": true, 00:09:48.897 "nvme_admin": false, 00:09:48.897 "nvme_io": false, 00:09:48.897 "nvme_io_md": false, 00:09:48.897 "write_zeroes": true, 00:09:48.897 "zcopy": true, 00:09:48.897 "get_zone_info": false, 00:09:48.897 "zone_management": false, 00:09:48.897 "zone_append": false, 00:09:48.897 "compare": false, 00:09:48.897 "compare_and_write": false, 00:09:48.897 "abort": true, 00:09:48.897 "seek_hole": false, 00:09:48.897 "seek_data": false, 00:09:48.897 "copy": true, 00:09:48.897 "nvme_iov_md": false 00:09:48.897 }, 00:09:48.897 "memory_domains": [ 00:09:48.897 { 00:09:48.897 "dma_device_id": "system", 00:09:48.897 "dma_device_type": 1 
00:09:48.897 }, 00:09:48.897 { 00:09:48.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.897 "dma_device_type": 2 00:09:48.897 } 00:09:48.897 ], 00:09:48.897 "driver_specific": {} 00:09:48.897 } 00:09:48.897 ] 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.897 "name": "Existed_Raid", 00:09:48.897 "uuid": "58fae50a-1af0-424a-bf4f-b52e436c2cb3", 00:09:48.897 "strip_size_kb": 0, 00:09:48.897 "state": "online", 00:09:48.897 "raid_level": "raid1", 00:09:48.897 "superblock": false, 00:09:48.897 "num_base_bdevs": 3, 00:09:48.897 "num_base_bdevs_discovered": 3, 00:09:48.897 "num_base_bdevs_operational": 3, 00:09:48.897 "base_bdevs_list": [ 00:09:48.897 { 00:09:48.897 "name": "NewBaseBdev", 00:09:48.897 "uuid": "854c9ab9-c064-4c0e-b574-2fbc2dd8ab33", 00:09:48.897 "is_configured": true, 00:09:48.897 "data_offset": 0, 00:09:48.897 "data_size": 65536 00:09:48.897 }, 00:09:48.897 { 00:09:48.897 "name": "BaseBdev2", 00:09:48.897 "uuid": "b5c353a9-89d8-4b35-991a-121713628432", 00:09:48.897 "is_configured": true, 00:09:48.897 "data_offset": 0, 00:09:48.897 "data_size": 65536 00:09:48.897 }, 00:09:48.897 { 00:09:48.897 "name": "BaseBdev3", 00:09:48.897 "uuid": "6975267c-d62e-42ac-801c-667250ebd35c", 00:09:48.897 "is_configured": true, 00:09:48.897 "data_offset": 0, 00:09:48.897 "data_size": 65536 00:09:48.897 } 00:09:48.897 ] 00:09:48.897 }' 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.897 10:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.466 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:49.466 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:49.466 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:49.466 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:49.466 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.466 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.466 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.466 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:49.466 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.466 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.466 [2024-11-15 10:54:56.131074] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.466 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.466 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.466 "name": "Existed_Raid", 00:09:49.466 "aliases": [ 00:09:49.466 "58fae50a-1af0-424a-bf4f-b52e436c2cb3" 00:09:49.466 ], 00:09:49.466 "product_name": "Raid Volume", 00:09:49.466 "block_size": 512, 00:09:49.466 "num_blocks": 65536, 00:09:49.466 "uuid": "58fae50a-1af0-424a-bf4f-b52e436c2cb3", 00:09:49.466 "assigned_rate_limits": { 00:09:49.466 "rw_ios_per_sec": 0, 00:09:49.466 "rw_mbytes_per_sec": 0, 00:09:49.466 "r_mbytes_per_sec": 0, 00:09:49.466 "w_mbytes_per_sec": 0 00:09:49.466 }, 00:09:49.466 "claimed": false, 00:09:49.466 "zoned": false, 00:09:49.466 "supported_io_types": { 00:09:49.466 "read": true, 00:09:49.466 "write": true, 00:09:49.466 "unmap": false, 00:09:49.466 "flush": false, 00:09:49.466 "reset": true, 00:09:49.466 "nvme_admin": false, 00:09:49.466 "nvme_io": false, 00:09:49.466 "nvme_io_md": false, 00:09:49.466 "write_zeroes": true, 00:09:49.466 "zcopy": false, 00:09:49.466 "get_zone_info": false, 00:09:49.466 "zone_management": false, 00:09:49.466 
"zone_append": false, 00:09:49.466 "compare": false, 00:09:49.466 "compare_and_write": false, 00:09:49.466 "abort": false, 00:09:49.466 "seek_hole": false, 00:09:49.466 "seek_data": false, 00:09:49.466 "copy": false, 00:09:49.466 "nvme_iov_md": false 00:09:49.466 }, 00:09:49.466 "memory_domains": [ 00:09:49.466 { 00:09:49.466 "dma_device_id": "system", 00:09:49.466 "dma_device_type": 1 00:09:49.466 }, 00:09:49.466 { 00:09:49.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.466 "dma_device_type": 2 00:09:49.466 }, 00:09:49.466 { 00:09:49.466 "dma_device_id": "system", 00:09:49.466 "dma_device_type": 1 00:09:49.466 }, 00:09:49.466 { 00:09:49.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.466 "dma_device_type": 2 00:09:49.466 }, 00:09:49.466 { 00:09:49.466 "dma_device_id": "system", 00:09:49.466 "dma_device_type": 1 00:09:49.466 }, 00:09:49.466 { 00:09:49.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.466 "dma_device_type": 2 00:09:49.466 } 00:09:49.467 ], 00:09:49.467 "driver_specific": { 00:09:49.467 "raid": { 00:09:49.467 "uuid": "58fae50a-1af0-424a-bf4f-b52e436c2cb3", 00:09:49.467 "strip_size_kb": 0, 00:09:49.467 "state": "online", 00:09:49.467 "raid_level": "raid1", 00:09:49.467 "superblock": false, 00:09:49.467 "num_base_bdevs": 3, 00:09:49.467 "num_base_bdevs_discovered": 3, 00:09:49.467 "num_base_bdevs_operational": 3, 00:09:49.467 "base_bdevs_list": [ 00:09:49.467 { 00:09:49.467 "name": "NewBaseBdev", 00:09:49.467 "uuid": "854c9ab9-c064-4c0e-b574-2fbc2dd8ab33", 00:09:49.467 "is_configured": true, 00:09:49.467 "data_offset": 0, 00:09:49.467 "data_size": 65536 00:09:49.467 }, 00:09:49.467 { 00:09:49.467 "name": "BaseBdev2", 00:09:49.467 "uuid": "b5c353a9-89d8-4b35-991a-121713628432", 00:09:49.467 "is_configured": true, 00:09:49.467 "data_offset": 0, 00:09:49.467 "data_size": 65536 00:09:49.467 }, 00:09:49.467 { 00:09:49.467 "name": "BaseBdev3", 00:09:49.467 "uuid": "6975267c-d62e-42ac-801c-667250ebd35c", 00:09:49.467 "is_configured": true, 
00:09:49.467 "data_offset": 0, 00:09:49.467 "data_size": 65536 00:09:49.467 } 00:09:49.467 ] 00:09:49.467 } 00:09:49.467 } 00:09:49.467 }' 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:49.467 BaseBdev2 00:09:49.467 BaseBdev3' 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.467 [2024-11-15 10:54:56.386320] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:09:49.467 [2024-11-15 10:54:56.386404] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.467 [2024-11-15 10:54:56.386520] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.467 [2024-11-15 10:54:56.386862] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.467 [2024-11-15 10:54:56.386923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67543 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 67543 ']' 00:09:49.467 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67543 00:09:49.727 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:49.727 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:49.727 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67543 00:09:49.727 killing process with pid 67543 00:09:49.727 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:49.727 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:49.727 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67543' 00:09:49.727 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 67543 00:09:49.727 [2024-11-15 10:54:56.430131] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:09:49.727 10:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67543 00:09:49.987 [2024-11-15 10:54:56.744790] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:51.366 00:09:51.366 real 0m10.674s 00:09:51.366 user 0m16.926s 00:09:51.366 sys 0m1.885s 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 ************************************ 00:09:51.366 END TEST raid_state_function_test 00:09:51.366 ************************************ 00:09:51.366 10:54:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:51.366 10:54:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:51.366 10:54:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:51.366 10:54:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 ************************************ 00:09:51.366 START TEST raid_state_function_test_sb 00:09:51.366 ************************************ 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68170 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:51.366 Process raid pid: 68170 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68170' 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68170 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 68170 ']' 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:51.366 10:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 [2024-11-15 10:54:58.087017] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:09:51.366 [2024-11-15 10:54:58.087212] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.366 [2024-11-15 10:54:58.262213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.625 [2024-11-15 10:54:58.378471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.884 [2024-11-15 10:54:58.583555] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.884 [2024-11-15 10:54:58.583673] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.142 10:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:52.142 10:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:52.142 10:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:52.142 10:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.142 10:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.142 [2024-11-15 10:54:58.957774] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:52.142 [2024-11-15 10:54:58.957829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:52.142 [2024-11-15 10:54:58.957839] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:52.142 [2024-11-15 10:54:58.957848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:52.142 [2024-11-15 10:54:58.957855] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:52.142 [2024-11-15 10:54:58.957863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:52.142 10:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.142 10:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:52.142 10:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.142 10:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.142 10:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.142 10:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.142 10:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.142 10:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.142 10:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.143 10:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.143 10:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.143 10:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.143 10:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.143 10:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.143 10:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.143 10:54:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.143 10:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.143 "name": "Existed_Raid", 00:09:52.143 "uuid": "8a6b3804-d052-4d31-8638-b4d11f16703c", 00:09:52.143 "strip_size_kb": 0, 00:09:52.143 "state": "configuring", 00:09:52.143 "raid_level": "raid1", 00:09:52.143 "superblock": true, 00:09:52.143 "num_base_bdevs": 3, 00:09:52.143 "num_base_bdevs_discovered": 0, 00:09:52.143 "num_base_bdevs_operational": 3, 00:09:52.143 "base_bdevs_list": [ 00:09:52.143 { 00:09:52.143 "name": "BaseBdev1", 00:09:52.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.143 "is_configured": false, 00:09:52.143 "data_offset": 0, 00:09:52.143 "data_size": 0 00:09:52.143 }, 00:09:52.143 { 00:09:52.143 "name": "BaseBdev2", 00:09:52.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.143 "is_configured": false, 00:09:52.143 "data_offset": 0, 00:09:52.143 "data_size": 0 00:09:52.143 }, 00:09:52.143 { 00:09:52.143 "name": "BaseBdev3", 00:09:52.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.143 "is_configured": false, 00:09:52.143 "data_offset": 0, 00:09:52.143 "data_size": 0 00:09:52.143 } 00:09:52.143 ] 00:09:52.143 }' 00:09:52.143 10:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.143 10:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.721 [2024-11-15 10:54:59.424920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:52.721 [2024-11-15 10:54:59.425022] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.721 [2024-11-15 10:54:59.432888] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:52.721 [2024-11-15 10:54:59.432931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:52.721 [2024-11-15 10:54:59.432940] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:52.721 [2024-11-15 10:54:59.432950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:52.721 [2024-11-15 10:54:59.432956] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:52.721 [2024-11-15 10:54:59.432966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.721 [2024-11-15 10:54:59.477163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.721 BaseBdev1 
00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.721 [ 00:09:52.721 { 00:09:52.721 "name": "BaseBdev1", 00:09:52.721 "aliases": [ 00:09:52.721 "4a19248a-1ef5-4deb-90a6-1be56056b50e" 00:09:52.721 ], 00:09:52.721 "product_name": "Malloc disk", 00:09:52.721 "block_size": 512, 00:09:52.721 "num_blocks": 65536, 00:09:52.721 "uuid": "4a19248a-1ef5-4deb-90a6-1be56056b50e", 00:09:52.721 "assigned_rate_limits": { 00:09:52.721 
"rw_ios_per_sec": 0, 00:09:52.721 "rw_mbytes_per_sec": 0, 00:09:52.721 "r_mbytes_per_sec": 0, 00:09:52.721 "w_mbytes_per_sec": 0 00:09:52.721 }, 00:09:52.721 "claimed": true, 00:09:52.721 "claim_type": "exclusive_write", 00:09:52.721 "zoned": false, 00:09:52.721 "supported_io_types": { 00:09:52.721 "read": true, 00:09:52.721 "write": true, 00:09:52.721 "unmap": true, 00:09:52.721 "flush": true, 00:09:52.721 "reset": true, 00:09:52.721 "nvme_admin": false, 00:09:52.721 "nvme_io": false, 00:09:52.721 "nvme_io_md": false, 00:09:52.721 "write_zeroes": true, 00:09:52.721 "zcopy": true, 00:09:52.721 "get_zone_info": false, 00:09:52.721 "zone_management": false, 00:09:52.721 "zone_append": false, 00:09:52.721 "compare": false, 00:09:52.721 "compare_and_write": false, 00:09:52.721 "abort": true, 00:09:52.721 "seek_hole": false, 00:09:52.721 "seek_data": false, 00:09:52.721 "copy": true, 00:09:52.721 "nvme_iov_md": false 00:09:52.721 }, 00:09:52.721 "memory_domains": [ 00:09:52.721 { 00:09:52.721 "dma_device_id": "system", 00:09:52.721 "dma_device_type": 1 00:09:52.721 }, 00:09:52.721 { 00:09:52.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.721 "dma_device_type": 2 00:09:52.721 } 00:09:52.721 ], 00:09:52.721 "driver_specific": {} 00:09:52.721 } 00:09:52.721 ] 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.721 "name": "Existed_Raid", 00:09:52.721 "uuid": "232ebd3d-6eba-4ad3-a712-2832375926bd", 00:09:52.721 "strip_size_kb": 0, 00:09:52.721 "state": "configuring", 00:09:52.721 "raid_level": "raid1", 00:09:52.721 "superblock": true, 00:09:52.721 "num_base_bdevs": 3, 00:09:52.721 "num_base_bdevs_discovered": 1, 00:09:52.721 "num_base_bdevs_operational": 3, 00:09:52.721 "base_bdevs_list": [ 00:09:52.721 { 00:09:52.721 "name": "BaseBdev1", 00:09:52.721 "uuid": "4a19248a-1ef5-4deb-90a6-1be56056b50e", 00:09:52.721 "is_configured": true, 00:09:52.721 "data_offset": 2048, 00:09:52.721 "data_size": 63488 
00:09:52.721 }, 00:09:52.721 { 00:09:52.721 "name": "BaseBdev2", 00:09:52.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.721 "is_configured": false, 00:09:52.721 "data_offset": 0, 00:09:52.721 "data_size": 0 00:09:52.721 }, 00:09:52.721 { 00:09:52.721 "name": "BaseBdev3", 00:09:52.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.721 "is_configured": false, 00:09:52.721 "data_offset": 0, 00:09:52.721 "data_size": 0 00:09:52.721 } 00:09:52.721 ] 00:09:52.721 }' 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.721 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.289 [2024-11-15 10:54:59.960383] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.289 [2024-11-15 10:54:59.960496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.289 [2024-11-15 10:54:59.968416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.289 [2024-11-15 10:54:59.970244] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.289 [2024-11-15 10:54:59.970289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.289 [2024-11-15 10:54:59.970308] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.289 [2024-11-15 10:54:59.970318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.289 10:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.289 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.289 "name": "Existed_Raid", 00:09:53.289 "uuid": "6c13b5ca-4441-4766-a44c-487611746999", 00:09:53.289 "strip_size_kb": 0, 00:09:53.289 "state": "configuring", 00:09:53.289 "raid_level": "raid1", 00:09:53.289 "superblock": true, 00:09:53.289 "num_base_bdevs": 3, 00:09:53.289 "num_base_bdevs_discovered": 1, 00:09:53.289 "num_base_bdevs_operational": 3, 00:09:53.289 "base_bdevs_list": [ 00:09:53.289 { 00:09:53.289 "name": "BaseBdev1", 00:09:53.289 "uuid": "4a19248a-1ef5-4deb-90a6-1be56056b50e", 00:09:53.289 "is_configured": true, 00:09:53.289 "data_offset": 2048, 00:09:53.289 "data_size": 63488 00:09:53.289 }, 00:09:53.289 { 00:09:53.289 "name": "BaseBdev2", 00:09:53.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.289 "is_configured": false, 00:09:53.289 "data_offset": 0, 00:09:53.289 "data_size": 0 00:09:53.289 }, 00:09:53.289 { 00:09:53.289 "name": "BaseBdev3", 00:09:53.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.289 "is_configured": false, 00:09:53.289 "data_offset": 0, 00:09:53.289 "data_size": 0 00:09:53.289 } 00:09:53.289 ] 00:09:53.289 }' 00:09:53.289 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.289 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:53.548 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:53.548 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.548 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.807 [2024-11-15 10:55:00.478820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.807 BaseBdev2 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.807 [ 00:09:53.807 { 00:09:53.807 "name": "BaseBdev2", 00:09:53.807 "aliases": [ 00:09:53.807 "63b136fe-4e9c-41e6-83e5-b31389f2169e" 00:09:53.807 ], 00:09:53.807 "product_name": "Malloc disk", 00:09:53.807 "block_size": 512, 00:09:53.807 "num_blocks": 65536, 00:09:53.807 "uuid": "63b136fe-4e9c-41e6-83e5-b31389f2169e", 00:09:53.807 "assigned_rate_limits": { 00:09:53.807 "rw_ios_per_sec": 0, 00:09:53.807 "rw_mbytes_per_sec": 0, 00:09:53.807 "r_mbytes_per_sec": 0, 00:09:53.807 "w_mbytes_per_sec": 0 00:09:53.807 }, 00:09:53.807 "claimed": true, 00:09:53.807 "claim_type": "exclusive_write", 00:09:53.807 "zoned": false, 00:09:53.807 "supported_io_types": { 00:09:53.807 "read": true, 00:09:53.807 "write": true, 00:09:53.807 "unmap": true, 00:09:53.807 "flush": true, 00:09:53.807 "reset": true, 00:09:53.807 "nvme_admin": false, 00:09:53.807 "nvme_io": false, 00:09:53.807 "nvme_io_md": false, 00:09:53.807 "write_zeroes": true, 00:09:53.807 "zcopy": true, 00:09:53.807 "get_zone_info": false, 00:09:53.807 "zone_management": false, 00:09:53.807 "zone_append": false, 00:09:53.807 "compare": false, 00:09:53.807 "compare_and_write": false, 00:09:53.807 "abort": true, 00:09:53.807 "seek_hole": false, 00:09:53.807 "seek_data": false, 00:09:53.807 "copy": true, 00:09:53.807 "nvme_iov_md": false 00:09:53.807 }, 00:09:53.807 "memory_domains": [ 00:09:53.807 { 00:09:53.807 "dma_device_id": "system", 00:09:53.807 "dma_device_type": 1 00:09:53.807 }, 00:09:53.807 { 00:09:53.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.807 "dma_device_type": 2 00:09:53.807 } 00:09:53.807 ], 00:09:53.807 "driver_specific": {} 00:09:53.807 } 00:09:53.807 ] 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.807 
10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.807 "name": "Existed_Raid", 00:09:53.807 "uuid": "6c13b5ca-4441-4766-a44c-487611746999", 00:09:53.807 "strip_size_kb": 0, 00:09:53.807 "state": "configuring", 00:09:53.807 "raid_level": "raid1", 00:09:53.807 "superblock": true, 00:09:53.807 "num_base_bdevs": 3, 00:09:53.807 "num_base_bdevs_discovered": 2, 00:09:53.807 "num_base_bdevs_operational": 3, 00:09:53.807 "base_bdevs_list": [ 00:09:53.807 { 00:09:53.807 "name": "BaseBdev1", 00:09:53.807 "uuid": "4a19248a-1ef5-4deb-90a6-1be56056b50e", 00:09:53.807 "is_configured": true, 00:09:53.807 "data_offset": 2048, 00:09:53.807 "data_size": 63488 00:09:53.807 }, 00:09:53.807 { 00:09:53.807 "name": "BaseBdev2", 00:09:53.807 "uuid": "63b136fe-4e9c-41e6-83e5-b31389f2169e", 00:09:53.807 "is_configured": true, 00:09:53.807 "data_offset": 2048, 00:09:53.807 "data_size": 63488 00:09:53.807 }, 00:09:53.807 { 00:09:53.807 "name": "BaseBdev3", 00:09:53.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.807 "is_configured": false, 00:09:53.807 "data_offset": 0, 00:09:53.807 "data_size": 0 00:09:53.807 } 00:09:53.807 ] 00:09:53.807 }' 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.807 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.066 10:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:54.066 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.066 10:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.324 [2024-11-15 10:55:01.019372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:54.324 [2024-11-15 10:55:01.019713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:54.324 [2024-11-15 10:55:01.019741] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:54.324 [2024-11-15 10:55:01.020021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:54.324 BaseBdev3 00:09:54.324 [2024-11-15 10:55:01.020167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:54.324 [2024-11-15 10:55:01.020181] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:54.324 [2024-11-15 10:55:01.020355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.324 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.324 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:54.324 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:54.324 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:54.324 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:54.324 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:54.324 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:54.324 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:54.324 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.324 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.324 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.324 10:55:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:54.324 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.324 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.324 [ 00:09:54.324 { 00:09:54.324 "name": "BaseBdev3", 00:09:54.324 "aliases": [ 00:09:54.324 "45fa94a4-c05a-4c2b-8618-361ccc2855e9" 00:09:54.324 ], 00:09:54.324 "product_name": "Malloc disk", 00:09:54.324 "block_size": 512, 00:09:54.324 "num_blocks": 65536, 00:09:54.324 "uuid": "45fa94a4-c05a-4c2b-8618-361ccc2855e9", 00:09:54.324 "assigned_rate_limits": { 00:09:54.324 "rw_ios_per_sec": 0, 00:09:54.325 "rw_mbytes_per_sec": 0, 00:09:54.325 "r_mbytes_per_sec": 0, 00:09:54.325 "w_mbytes_per_sec": 0 00:09:54.325 }, 00:09:54.325 "claimed": true, 00:09:54.325 "claim_type": "exclusive_write", 00:09:54.325 "zoned": false, 00:09:54.325 "supported_io_types": { 00:09:54.325 "read": true, 00:09:54.325 "write": true, 00:09:54.325 "unmap": true, 00:09:54.325 "flush": true, 00:09:54.325 "reset": true, 00:09:54.325 "nvme_admin": false, 00:09:54.325 "nvme_io": false, 00:09:54.325 "nvme_io_md": false, 00:09:54.325 "write_zeroes": true, 00:09:54.325 "zcopy": true, 00:09:54.325 "get_zone_info": false, 00:09:54.325 "zone_management": false, 00:09:54.325 "zone_append": false, 00:09:54.325 "compare": false, 00:09:54.325 "compare_and_write": false, 00:09:54.325 "abort": true, 00:09:54.325 "seek_hole": false, 00:09:54.325 "seek_data": false, 00:09:54.325 "copy": true, 00:09:54.325 "nvme_iov_md": false 00:09:54.325 }, 00:09:54.325 "memory_domains": [ 00:09:54.325 { 00:09:54.325 "dma_device_id": "system", 00:09:54.325 "dma_device_type": 1 00:09:54.325 }, 00:09:54.325 { 00:09:54.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.325 "dma_device_type": 2 00:09:54.325 } 00:09:54.325 ], 00:09:54.325 "driver_specific": {} 00:09:54.325 } 00:09:54.325 ] 
00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.325 
10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.325 "name": "Existed_Raid", 00:09:54.325 "uuid": "6c13b5ca-4441-4766-a44c-487611746999", 00:09:54.325 "strip_size_kb": 0, 00:09:54.325 "state": "online", 00:09:54.325 "raid_level": "raid1", 00:09:54.325 "superblock": true, 00:09:54.325 "num_base_bdevs": 3, 00:09:54.325 "num_base_bdevs_discovered": 3, 00:09:54.325 "num_base_bdevs_operational": 3, 00:09:54.325 "base_bdevs_list": [ 00:09:54.325 { 00:09:54.325 "name": "BaseBdev1", 00:09:54.325 "uuid": "4a19248a-1ef5-4deb-90a6-1be56056b50e", 00:09:54.325 "is_configured": true, 00:09:54.325 "data_offset": 2048, 00:09:54.325 "data_size": 63488 00:09:54.325 }, 00:09:54.325 { 00:09:54.325 "name": "BaseBdev2", 00:09:54.325 "uuid": "63b136fe-4e9c-41e6-83e5-b31389f2169e", 00:09:54.325 "is_configured": true, 00:09:54.325 "data_offset": 2048, 00:09:54.325 "data_size": 63488 00:09:54.325 }, 00:09:54.325 { 00:09:54.325 "name": "BaseBdev3", 00:09:54.325 "uuid": "45fa94a4-c05a-4c2b-8618-361ccc2855e9", 00:09:54.325 "is_configured": true, 00:09:54.325 "data_offset": 2048, 00:09:54.325 "data_size": 63488 00:09:54.325 } 00:09:54.325 ] 00:09:54.325 }' 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.325 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.584 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:54.584 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:54.584 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:54.584 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:54.584 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:54.584 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:54.584 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:54.584 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.584 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.584 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:54.584 [2024-11-15 10:55:01.494908] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.584 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.843 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:54.843 "name": "Existed_Raid", 00:09:54.843 "aliases": [ 00:09:54.844 "6c13b5ca-4441-4766-a44c-487611746999" 00:09:54.844 ], 00:09:54.844 "product_name": "Raid Volume", 00:09:54.844 "block_size": 512, 00:09:54.844 "num_blocks": 63488, 00:09:54.844 "uuid": "6c13b5ca-4441-4766-a44c-487611746999", 00:09:54.844 "assigned_rate_limits": { 00:09:54.844 "rw_ios_per_sec": 0, 00:09:54.844 "rw_mbytes_per_sec": 0, 00:09:54.844 "r_mbytes_per_sec": 0, 00:09:54.844 "w_mbytes_per_sec": 0 00:09:54.844 }, 00:09:54.844 "claimed": false, 00:09:54.844 "zoned": false, 00:09:54.844 "supported_io_types": { 00:09:54.844 "read": true, 00:09:54.844 "write": true, 00:09:54.844 "unmap": false, 00:09:54.844 "flush": false, 00:09:54.844 "reset": true, 00:09:54.844 "nvme_admin": false, 00:09:54.844 "nvme_io": false, 00:09:54.844 "nvme_io_md": false, 00:09:54.844 "write_zeroes": true, 
00:09:54.844 "zcopy": false, 00:09:54.844 "get_zone_info": false, 00:09:54.844 "zone_management": false, 00:09:54.844 "zone_append": false, 00:09:54.844 "compare": false, 00:09:54.844 "compare_and_write": false, 00:09:54.844 "abort": false, 00:09:54.844 "seek_hole": false, 00:09:54.844 "seek_data": false, 00:09:54.844 "copy": false, 00:09:54.844 "nvme_iov_md": false 00:09:54.844 }, 00:09:54.844 "memory_domains": [ 00:09:54.844 { 00:09:54.844 "dma_device_id": "system", 00:09:54.844 "dma_device_type": 1 00:09:54.844 }, 00:09:54.844 { 00:09:54.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.844 "dma_device_type": 2 00:09:54.844 }, 00:09:54.844 { 00:09:54.844 "dma_device_id": "system", 00:09:54.844 "dma_device_type": 1 00:09:54.844 }, 00:09:54.844 { 00:09:54.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.844 "dma_device_type": 2 00:09:54.844 }, 00:09:54.844 { 00:09:54.844 "dma_device_id": "system", 00:09:54.844 "dma_device_type": 1 00:09:54.844 }, 00:09:54.844 { 00:09:54.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.844 "dma_device_type": 2 00:09:54.844 } 00:09:54.844 ], 00:09:54.844 "driver_specific": { 00:09:54.844 "raid": { 00:09:54.844 "uuid": "6c13b5ca-4441-4766-a44c-487611746999", 00:09:54.844 "strip_size_kb": 0, 00:09:54.844 "state": "online", 00:09:54.844 "raid_level": "raid1", 00:09:54.844 "superblock": true, 00:09:54.844 "num_base_bdevs": 3, 00:09:54.844 "num_base_bdevs_discovered": 3, 00:09:54.844 "num_base_bdevs_operational": 3, 00:09:54.844 "base_bdevs_list": [ 00:09:54.844 { 00:09:54.844 "name": "BaseBdev1", 00:09:54.844 "uuid": "4a19248a-1ef5-4deb-90a6-1be56056b50e", 00:09:54.844 "is_configured": true, 00:09:54.844 "data_offset": 2048, 00:09:54.844 "data_size": 63488 00:09:54.844 }, 00:09:54.844 { 00:09:54.844 "name": "BaseBdev2", 00:09:54.844 "uuid": "63b136fe-4e9c-41e6-83e5-b31389f2169e", 00:09:54.844 "is_configured": true, 00:09:54.844 "data_offset": 2048, 00:09:54.844 "data_size": 63488 00:09:54.844 }, 00:09:54.844 { 
00:09:54.844 "name": "BaseBdev3", 00:09:54.844 "uuid": "45fa94a4-c05a-4c2b-8618-361ccc2855e9", 00:09:54.844 "is_configured": true, 00:09:54.844 "data_offset": 2048, 00:09:54.844 "data_size": 63488 00:09:54.844 } 00:09:54.844 ] 00:09:54.844 } 00:09:54.844 } 00:09:54.844 }' 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:54.844 BaseBdev2 00:09:54.844 BaseBdev3' 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.844 10:55:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.844 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.103 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.103 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.103 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:55.103 10:55:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.103 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 [2024-11-15 10:55:01.790147] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:55.103 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.103 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:55.103 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:55.103 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.103 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.104 
10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.104 "name": "Existed_Raid", 00:09:55.104 "uuid": "6c13b5ca-4441-4766-a44c-487611746999", 00:09:55.104 "strip_size_kb": 0, 00:09:55.104 "state": "online", 00:09:55.104 "raid_level": "raid1", 00:09:55.104 "superblock": true, 00:09:55.104 "num_base_bdevs": 3, 00:09:55.104 "num_base_bdevs_discovered": 2, 00:09:55.104 "num_base_bdevs_operational": 2, 00:09:55.104 "base_bdevs_list": [ 00:09:55.104 { 00:09:55.104 "name": null, 00:09:55.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.104 "is_configured": false, 00:09:55.104 "data_offset": 0, 00:09:55.104 "data_size": 63488 00:09:55.104 }, 00:09:55.104 { 00:09:55.104 "name": "BaseBdev2", 00:09:55.104 "uuid": "63b136fe-4e9c-41e6-83e5-b31389f2169e", 00:09:55.104 "is_configured": true, 00:09:55.104 "data_offset": 2048, 00:09:55.104 "data_size": 63488 00:09:55.104 }, 00:09:55.104 { 00:09:55.104 "name": "BaseBdev3", 00:09:55.104 "uuid": "45fa94a4-c05a-4c2b-8618-361ccc2855e9", 00:09:55.104 "is_configured": true, 00:09:55.104 "data_offset": 2048, 00:09:55.104 "data_size": 63488 00:09:55.104 } 00:09:55.104 ] 00:09:55.104 }' 00:09:55.104 10:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.104 
10:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.672 [2024-11-15 10:55:02.412891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.672 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.672 [2024-11-15 10:55:02.567808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:55.672 [2024-11-15 10:55:02.567997] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.931 [2024-11-15 10:55:02.666557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.931 [2024-11-15 10:55:02.666695] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.931 [2024-11-15 10:55:02.666752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:55.931 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.931 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:55.931 10:55:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:55.931 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.931 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.931 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.931 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:55.931 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.931 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:55.931 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.932 BaseBdev2 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.932 [ 00:09:55.932 { 00:09:55.932 "name": "BaseBdev2", 00:09:55.932 "aliases": [ 00:09:55.932 "ec57a3ea-0c60-4c77-ad3c-e8526bbdb7e1" 00:09:55.932 ], 00:09:55.932 "product_name": "Malloc disk", 00:09:55.932 "block_size": 512, 00:09:55.932 "num_blocks": 65536, 00:09:55.932 "uuid": "ec57a3ea-0c60-4c77-ad3c-e8526bbdb7e1", 00:09:55.932 "assigned_rate_limits": { 00:09:55.932 "rw_ios_per_sec": 0, 00:09:55.932 "rw_mbytes_per_sec": 0, 00:09:55.932 "r_mbytes_per_sec": 0, 00:09:55.932 "w_mbytes_per_sec": 0 00:09:55.932 }, 00:09:55.932 "claimed": false, 00:09:55.932 "zoned": false, 00:09:55.932 "supported_io_types": { 00:09:55.932 "read": true, 00:09:55.932 "write": true, 00:09:55.932 "unmap": true, 00:09:55.932 "flush": true, 00:09:55.932 "reset": true, 00:09:55.932 "nvme_admin": false, 00:09:55.932 "nvme_io": false, 00:09:55.932 
"nvme_io_md": false, 00:09:55.932 "write_zeroes": true, 00:09:55.932 "zcopy": true, 00:09:55.932 "get_zone_info": false, 00:09:55.932 "zone_management": false, 00:09:55.932 "zone_append": false, 00:09:55.932 "compare": false, 00:09:55.932 "compare_and_write": false, 00:09:55.932 "abort": true, 00:09:55.932 "seek_hole": false, 00:09:55.932 "seek_data": false, 00:09:55.932 "copy": true, 00:09:55.932 "nvme_iov_md": false 00:09:55.932 }, 00:09:55.932 "memory_domains": [ 00:09:55.932 { 00:09:55.932 "dma_device_id": "system", 00:09:55.932 "dma_device_type": 1 00:09:55.932 }, 00:09:55.932 { 00:09:55.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.932 "dma_device_type": 2 00:09:55.932 } 00:09:55.932 ], 00:09:55.932 "driver_specific": {} 00:09:55.932 } 00:09:55.932 ] 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.932 BaseBdev3 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.932 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.191 [ 00:09:56.191 { 00:09:56.191 "name": "BaseBdev3", 00:09:56.191 "aliases": [ 00:09:56.191 "2d1bfcc6-1bf2-48f4-85e3-c68f64b44762" 00:09:56.191 ], 00:09:56.191 "product_name": "Malloc disk", 00:09:56.191 "block_size": 512, 00:09:56.191 "num_blocks": 65536, 00:09:56.191 "uuid": "2d1bfcc6-1bf2-48f4-85e3-c68f64b44762", 00:09:56.191 "assigned_rate_limits": { 00:09:56.191 "rw_ios_per_sec": 0, 00:09:56.191 "rw_mbytes_per_sec": 0, 00:09:56.191 "r_mbytes_per_sec": 0, 00:09:56.191 "w_mbytes_per_sec": 0 00:09:56.191 }, 00:09:56.191 "claimed": false, 00:09:56.191 "zoned": false, 00:09:56.191 "supported_io_types": { 00:09:56.191 "read": true, 00:09:56.191 "write": true, 00:09:56.191 "unmap": true, 00:09:56.191 "flush": true, 00:09:56.191 "reset": true, 00:09:56.191 "nvme_admin": false, 
00:09:56.191 "nvme_io": false, 00:09:56.191 "nvme_io_md": false, 00:09:56.191 "write_zeroes": true, 00:09:56.191 "zcopy": true, 00:09:56.191 "get_zone_info": false, 00:09:56.191 "zone_management": false, 00:09:56.191 "zone_append": false, 00:09:56.191 "compare": false, 00:09:56.191 "compare_and_write": false, 00:09:56.191 "abort": true, 00:09:56.191 "seek_hole": false, 00:09:56.191 "seek_data": false, 00:09:56.191 "copy": true, 00:09:56.191 "nvme_iov_md": false 00:09:56.191 }, 00:09:56.191 "memory_domains": [ 00:09:56.191 { 00:09:56.191 "dma_device_id": "system", 00:09:56.191 "dma_device_type": 1 00:09:56.191 }, 00:09:56.191 { 00:09:56.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.191 "dma_device_type": 2 00:09:56.191 } 00:09:56.191 ], 00:09:56.191 "driver_specific": {} 00:09:56.191 } 00:09:56.191 ] 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.191 [2024-11-15 10:55:02.896851] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.191 [2024-11-15 10:55:02.896952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.191 [2024-11-15 10:55:02.897004] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.191 [2024-11-15 10:55:02.899082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.191 
10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.191 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.191 "name": "Existed_Raid", 00:09:56.192 "uuid": "061e4a8e-cfdd-4680-88e7-29f93159ada5", 00:09:56.192 "strip_size_kb": 0, 00:09:56.192 "state": "configuring", 00:09:56.192 "raid_level": "raid1", 00:09:56.192 "superblock": true, 00:09:56.192 "num_base_bdevs": 3, 00:09:56.192 "num_base_bdevs_discovered": 2, 00:09:56.192 "num_base_bdevs_operational": 3, 00:09:56.192 "base_bdevs_list": [ 00:09:56.192 { 00:09:56.192 "name": "BaseBdev1", 00:09:56.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.192 "is_configured": false, 00:09:56.192 "data_offset": 0, 00:09:56.192 "data_size": 0 00:09:56.192 }, 00:09:56.192 { 00:09:56.192 "name": "BaseBdev2", 00:09:56.192 "uuid": "ec57a3ea-0c60-4c77-ad3c-e8526bbdb7e1", 00:09:56.192 "is_configured": true, 00:09:56.192 "data_offset": 2048, 00:09:56.192 "data_size": 63488 00:09:56.192 }, 00:09:56.192 { 00:09:56.192 "name": "BaseBdev3", 00:09:56.192 "uuid": "2d1bfcc6-1bf2-48f4-85e3-c68f64b44762", 00:09:56.192 "is_configured": true, 00:09:56.192 "data_offset": 2048, 00:09:56.192 "data_size": 63488 00:09:56.192 } 00:09:56.192 ] 00:09:56.192 }' 00:09:56.192 10:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.192 10:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.450 [2024-11-15 10:55:03.288185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:56.450 10:55:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.450 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.450 "name": 
"Existed_Raid", 00:09:56.450 "uuid": "061e4a8e-cfdd-4680-88e7-29f93159ada5", 00:09:56.450 "strip_size_kb": 0, 00:09:56.450 "state": "configuring", 00:09:56.450 "raid_level": "raid1", 00:09:56.450 "superblock": true, 00:09:56.450 "num_base_bdevs": 3, 00:09:56.450 "num_base_bdevs_discovered": 1, 00:09:56.450 "num_base_bdevs_operational": 3, 00:09:56.450 "base_bdevs_list": [ 00:09:56.450 { 00:09:56.450 "name": "BaseBdev1", 00:09:56.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.451 "is_configured": false, 00:09:56.451 "data_offset": 0, 00:09:56.451 "data_size": 0 00:09:56.451 }, 00:09:56.451 { 00:09:56.451 "name": null, 00:09:56.451 "uuid": "ec57a3ea-0c60-4c77-ad3c-e8526bbdb7e1", 00:09:56.451 "is_configured": false, 00:09:56.451 "data_offset": 0, 00:09:56.451 "data_size": 63488 00:09:56.451 }, 00:09:56.451 { 00:09:56.451 "name": "BaseBdev3", 00:09:56.451 "uuid": "2d1bfcc6-1bf2-48f4-85e3-c68f64b44762", 00:09:56.451 "is_configured": true, 00:09:56.451 "data_offset": 2048, 00:09:56.451 "data_size": 63488 00:09:56.451 } 00:09:56.451 ] 00:09:56.451 }' 00:09:56.451 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.451 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.026 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:57.026 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.026 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.026 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.026 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.026 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:57.026 
10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:57.026 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.026 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.026 [2024-11-15 10:55:03.817425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.026 BaseBdev1 00:09:57.027 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.027 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:57.027 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:57.027 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:57.027 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:57.027 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:57.027 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:57.027 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:57.027 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.027 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.027 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.027 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:57.027 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:57.027 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.027 [ 00:09:57.027 { 00:09:57.027 "name": "BaseBdev1", 00:09:57.027 "aliases": [ 00:09:57.027 "cf76cd6f-01b0-476a-9537-a9990cb2db84" 00:09:57.027 ], 00:09:57.027 "product_name": "Malloc disk", 00:09:57.027 "block_size": 512, 00:09:57.027 "num_blocks": 65536, 00:09:57.027 "uuid": "cf76cd6f-01b0-476a-9537-a9990cb2db84", 00:09:57.027 "assigned_rate_limits": { 00:09:57.027 "rw_ios_per_sec": 0, 00:09:57.028 "rw_mbytes_per_sec": 0, 00:09:57.028 "r_mbytes_per_sec": 0, 00:09:57.028 "w_mbytes_per_sec": 0 00:09:57.028 }, 00:09:57.028 "claimed": true, 00:09:57.028 "claim_type": "exclusive_write", 00:09:57.028 "zoned": false, 00:09:57.028 "supported_io_types": { 00:09:57.028 "read": true, 00:09:57.028 "write": true, 00:09:57.028 "unmap": true, 00:09:57.028 "flush": true, 00:09:57.028 "reset": true, 00:09:57.028 "nvme_admin": false, 00:09:57.028 "nvme_io": false, 00:09:57.028 "nvme_io_md": false, 00:09:57.028 "write_zeroes": true, 00:09:57.028 "zcopy": true, 00:09:57.028 "get_zone_info": false, 00:09:57.028 "zone_management": false, 00:09:57.028 "zone_append": false, 00:09:57.028 "compare": false, 00:09:57.028 "compare_and_write": false, 00:09:57.028 "abort": true, 00:09:57.028 "seek_hole": false, 00:09:57.028 "seek_data": false, 00:09:57.028 "copy": true, 00:09:57.028 "nvme_iov_md": false 00:09:57.028 }, 00:09:57.028 "memory_domains": [ 00:09:57.028 { 00:09:57.028 "dma_device_id": "system", 00:09:57.028 "dma_device_type": 1 00:09:57.028 }, 00:09:57.028 { 00:09:57.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.028 "dma_device_type": 2 00:09:57.028 } 00:09:57.028 ], 00:09:57.028 "driver_specific": {} 00:09:57.028 } 00:09:57.028 ] 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:57.028 
10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.028 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.028 "name": "Existed_Raid", 00:09:57.028 "uuid": "061e4a8e-cfdd-4680-88e7-29f93159ada5", 00:09:57.028 "strip_size_kb": 0, 
00:09:57.028 "state": "configuring", 00:09:57.028 "raid_level": "raid1", 00:09:57.028 "superblock": true, 00:09:57.028 "num_base_bdevs": 3, 00:09:57.028 "num_base_bdevs_discovered": 2, 00:09:57.028 "num_base_bdevs_operational": 3, 00:09:57.028 "base_bdevs_list": [ 00:09:57.028 { 00:09:57.028 "name": "BaseBdev1", 00:09:57.028 "uuid": "cf76cd6f-01b0-476a-9537-a9990cb2db84", 00:09:57.028 "is_configured": true, 00:09:57.028 "data_offset": 2048, 00:09:57.028 "data_size": 63488 00:09:57.028 }, 00:09:57.028 { 00:09:57.028 "name": null, 00:09:57.028 "uuid": "ec57a3ea-0c60-4c77-ad3c-e8526bbdb7e1", 00:09:57.028 "is_configured": false, 00:09:57.028 "data_offset": 0, 00:09:57.028 "data_size": 63488 00:09:57.028 }, 00:09:57.028 { 00:09:57.029 "name": "BaseBdev3", 00:09:57.029 "uuid": "2d1bfcc6-1bf2-48f4-85e3-c68f64b44762", 00:09:57.029 "is_configured": true, 00:09:57.029 "data_offset": 2048, 00:09:57.029 "data_size": 63488 00:09:57.029 } 00:09:57.029 ] 00:09:57.029 }' 00:09:57.029 10:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.029 10:55:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.598 [2024-11-15 10:55:04.348609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.598 "name": "Existed_Raid", 00:09:57.598 "uuid": "061e4a8e-cfdd-4680-88e7-29f93159ada5", 00:09:57.598 "strip_size_kb": 0, 00:09:57.598 "state": "configuring", 00:09:57.598 "raid_level": "raid1", 00:09:57.598 "superblock": true, 00:09:57.598 "num_base_bdevs": 3, 00:09:57.598 "num_base_bdevs_discovered": 1, 00:09:57.598 "num_base_bdevs_operational": 3, 00:09:57.598 "base_bdevs_list": [ 00:09:57.598 { 00:09:57.598 "name": "BaseBdev1", 00:09:57.598 "uuid": "cf76cd6f-01b0-476a-9537-a9990cb2db84", 00:09:57.598 "is_configured": true, 00:09:57.598 "data_offset": 2048, 00:09:57.598 "data_size": 63488 00:09:57.598 }, 00:09:57.598 { 00:09:57.598 "name": null, 00:09:57.598 "uuid": "ec57a3ea-0c60-4c77-ad3c-e8526bbdb7e1", 00:09:57.598 "is_configured": false, 00:09:57.598 "data_offset": 0, 00:09:57.598 "data_size": 63488 00:09:57.598 }, 00:09:57.598 { 00:09:57.598 "name": null, 00:09:57.598 "uuid": "2d1bfcc6-1bf2-48f4-85e3-c68f64b44762", 00:09:57.598 "is_configured": false, 00:09:57.598 "data_offset": 0, 00:09:57.598 "data_size": 63488 00:09:57.598 } 00:09:57.598 ] 00:09:57.598 }' 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.598 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.861 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.861 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:57.861 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:57.861 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.120 [2024-11-15 10:55:04.827964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.120 10:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.120 "name": "Existed_Raid", 00:09:58.120 "uuid": "061e4a8e-cfdd-4680-88e7-29f93159ada5", 00:09:58.121 "strip_size_kb": 0, 00:09:58.121 "state": "configuring", 00:09:58.121 "raid_level": "raid1", 00:09:58.121 "superblock": true, 00:09:58.121 "num_base_bdevs": 3, 00:09:58.121 "num_base_bdevs_discovered": 2, 00:09:58.121 "num_base_bdevs_operational": 3, 00:09:58.121 "base_bdevs_list": [ 00:09:58.121 { 00:09:58.121 "name": "BaseBdev1", 00:09:58.121 "uuid": "cf76cd6f-01b0-476a-9537-a9990cb2db84", 00:09:58.121 "is_configured": true, 00:09:58.121 "data_offset": 2048, 00:09:58.121 "data_size": 63488 00:09:58.121 }, 00:09:58.121 { 00:09:58.121 "name": null, 00:09:58.121 "uuid": "ec57a3ea-0c60-4c77-ad3c-e8526bbdb7e1", 00:09:58.121 "is_configured": false, 00:09:58.121 "data_offset": 0, 00:09:58.121 "data_size": 63488 00:09:58.121 }, 00:09:58.121 { 00:09:58.121 "name": "BaseBdev3", 00:09:58.121 "uuid": "2d1bfcc6-1bf2-48f4-85e3-c68f64b44762", 00:09:58.121 "is_configured": true, 00:09:58.121 "data_offset": 2048, 00:09:58.121 "data_size": 63488 00:09:58.121 } 00:09:58.121 ] 00:09:58.121 }' 00:09:58.121 10:55:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.121 10:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.391 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:58.391 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.391 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.391 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.660 [2024-11-15 10:55:05.351105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.660 "name": "Existed_Raid", 00:09:58.660 "uuid": "061e4a8e-cfdd-4680-88e7-29f93159ada5", 00:09:58.660 "strip_size_kb": 0, 00:09:58.660 "state": "configuring", 00:09:58.660 "raid_level": "raid1", 00:09:58.660 "superblock": true, 00:09:58.660 "num_base_bdevs": 3, 00:09:58.660 "num_base_bdevs_discovered": 1, 00:09:58.660 "num_base_bdevs_operational": 3, 00:09:58.660 "base_bdevs_list": [ 00:09:58.660 { 00:09:58.660 "name": null, 00:09:58.660 "uuid": "cf76cd6f-01b0-476a-9537-a9990cb2db84", 00:09:58.660 "is_configured": false, 00:09:58.660 "data_offset": 0, 00:09:58.660 "data_size": 63488 00:09:58.660 }, 00:09:58.660 { 00:09:58.660 "name": null, 00:09:58.660 "uuid": 
"ec57a3ea-0c60-4c77-ad3c-e8526bbdb7e1", 00:09:58.660 "is_configured": false, 00:09:58.660 "data_offset": 0, 00:09:58.660 "data_size": 63488 00:09:58.660 }, 00:09:58.660 { 00:09:58.660 "name": "BaseBdev3", 00:09:58.660 "uuid": "2d1bfcc6-1bf2-48f4-85e3-c68f64b44762", 00:09:58.660 "is_configured": true, 00:09:58.660 "data_offset": 2048, 00:09:58.660 "data_size": 63488 00:09:58.660 } 00:09:58.660 ] 00:09:58.660 }' 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.660 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.229 [2024-11-15 10:55:05.901944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.229 "name": "Existed_Raid", 00:09:59.229 "uuid": "061e4a8e-cfdd-4680-88e7-29f93159ada5", 00:09:59.229 "strip_size_kb": 0, 00:09:59.229 "state": "configuring", 00:09:59.229 
"raid_level": "raid1", 00:09:59.229 "superblock": true, 00:09:59.229 "num_base_bdevs": 3, 00:09:59.229 "num_base_bdevs_discovered": 2, 00:09:59.229 "num_base_bdevs_operational": 3, 00:09:59.229 "base_bdevs_list": [ 00:09:59.229 { 00:09:59.229 "name": null, 00:09:59.229 "uuid": "cf76cd6f-01b0-476a-9537-a9990cb2db84", 00:09:59.229 "is_configured": false, 00:09:59.229 "data_offset": 0, 00:09:59.229 "data_size": 63488 00:09:59.229 }, 00:09:59.229 { 00:09:59.229 "name": "BaseBdev2", 00:09:59.229 "uuid": "ec57a3ea-0c60-4c77-ad3c-e8526bbdb7e1", 00:09:59.229 "is_configured": true, 00:09:59.229 "data_offset": 2048, 00:09:59.229 "data_size": 63488 00:09:59.229 }, 00:09:59.229 { 00:09:59.229 "name": "BaseBdev3", 00:09:59.229 "uuid": "2d1bfcc6-1bf2-48f4-85e3-c68f64b44762", 00:09:59.229 "is_configured": true, 00:09:59.229 "data_offset": 2048, 00:09:59.229 "data_size": 63488 00:09:59.229 } 00:09:59.229 ] 00:09:59.229 }' 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.229 10:55:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.488 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.488 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:59.488 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.488 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.488 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.488 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:59.488 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.488 10:55:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:59.488 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.488 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cf76cd6f-01b0-476a-9537-a9990cb2db84 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.748 [2024-11-15 10:55:06.484861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:59.748 [2024-11-15 10:55:06.485227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:59.748 [2024-11-15 10:55:06.485245] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:59.748 [2024-11-15 10:55:06.485534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:59.748 [2024-11-15 10:55:06.485696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:59.748 [2024-11-15 10:55:06.485710] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:59.748 [2024-11-15 10:55:06.485846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.748 NewBaseBdev 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:59.748 
10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.748 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.748 [ 00:09:59.748 { 00:09:59.748 "name": "NewBaseBdev", 00:09:59.748 "aliases": [ 00:09:59.748 "cf76cd6f-01b0-476a-9537-a9990cb2db84" 00:09:59.748 ], 00:09:59.748 "product_name": "Malloc disk", 00:09:59.748 "block_size": 512, 00:09:59.748 "num_blocks": 65536, 00:09:59.748 "uuid": "cf76cd6f-01b0-476a-9537-a9990cb2db84", 00:09:59.748 "assigned_rate_limits": { 00:09:59.748 "rw_ios_per_sec": 0, 00:09:59.748 "rw_mbytes_per_sec": 0, 00:09:59.748 "r_mbytes_per_sec": 0, 00:09:59.749 "w_mbytes_per_sec": 0 00:09:59.749 }, 00:09:59.749 "claimed": true, 00:09:59.749 "claim_type": "exclusive_write", 00:09:59.749 
"zoned": false, 00:09:59.749 "supported_io_types": { 00:09:59.749 "read": true, 00:09:59.749 "write": true, 00:09:59.749 "unmap": true, 00:09:59.749 "flush": true, 00:09:59.749 "reset": true, 00:09:59.749 "nvme_admin": false, 00:09:59.749 "nvme_io": false, 00:09:59.749 "nvme_io_md": false, 00:09:59.749 "write_zeroes": true, 00:09:59.749 "zcopy": true, 00:09:59.749 "get_zone_info": false, 00:09:59.749 "zone_management": false, 00:09:59.749 "zone_append": false, 00:09:59.749 "compare": false, 00:09:59.749 "compare_and_write": false, 00:09:59.749 "abort": true, 00:09:59.749 "seek_hole": false, 00:09:59.749 "seek_data": false, 00:09:59.749 "copy": true, 00:09:59.749 "nvme_iov_md": false 00:09:59.749 }, 00:09:59.749 "memory_domains": [ 00:09:59.749 { 00:09:59.749 "dma_device_id": "system", 00:09:59.749 "dma_device_type": 1 00:09:59.749 }, 00:09:59.749 { 00:09:59.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.749 "dma_device_type": 2 00:09:59.749 } 00:09:59.749 ], 00:09:59.749 "driver_specific": {} 00:09:59.749 } 00:09:59.749 ] 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.749 "name": "Existed_Raid", 00:09:59.749 "uuid": "061e4a8e-cfdd-4680-88e7-29f93159ada5", 00:09:59.749 "strip_size_kb": 0, 00:09:59.749 "state": "online", 00:09:59.749 "raid_level": "raid1", 00:09:59.749 "superblock": true, 00:09:59.749 "num_base_bdevs": 3, 00:09:59.749 "num_base_bdevs_discovered": 3, 00:09:59.749 "num_base_bdevs_operational": 3, 00:09:59.749 "base_bdevs_list": [ 00:09:59.749 { 00:09:59.749 "name": "NewBaseBdev", 00:09:59.749 "uuid": "cf76cd6f-01b0-476a-9537-a9990cb2db84", 00:09:59.749 "is_configured": true, 00:09:59.749 "data_offset": 2048, 00:09:59.749 "data_size": 63488 00:09:59.749 }, 00:09:59.749 { 00:09:59.749 "name": "BaseBdev2", 00:09:59.749 "uuid": "ec57a3ea-0c60-4c77-ad3c-e8526bbdb7e1", 00:09:59.749 "is_configured": true, 00:09:59.749 "data_offset": 2048, 00:09:59.749 "data_size": 63488 00:09:59.749 }, 00:09:59.749 
{ 00:09:59.749 "name": "BaseBdev3", 00:09:59.749 "uuid": "2d1bfcc6-1bf2-48f4-85e3-c68f64b44762", 00:09:59.749 "is_configured": true, 00:09:59.749 "data_offset": 2048, 00:09:59.749 "data_size": 63488 00:09:59.749 } 00:09:59.749 ] 00:09:59.749 }' 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.749 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.009 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:00.009 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:00.009 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:00.009 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:00.009 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:00.009 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:00.009 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:00.009 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:00.009 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.009 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.009 [2024-11-15 10:55:06.876588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.009 10:55:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.009 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:00.009 "name": "Existed_Raid", 00:10:00.009 
"aliases": [ 00:10:00.009 "061e4a8e-cfdd-4680-88e7-29f93159ada5" 00:10:00.009 ], 00:10:00.009 "product_name": "Raid Volume", 00:10:00.009 "block_size": 512, 00:10:00.009 "num_blocks": 63488, 00:10:00.009 "uuid": "061e4a8e-cfdd-4680-88e7-29f93159ada5", 00:10:00.009 "assigned_rate_limits": { 00:10:00.009 "rw_ios_per_sec": 0, 00:10:00.009 "rw_mbytes_per_sec": 0, 00:10:00.009 "r_mbytes_per_sec": 0, 00:10:00.009 "w_mbytes_per_sec": 0 00:10:00.009 }, 00:10:00.009 "claimed": false, 00:10:00.009 "zoned": false, 00:10:00.009 "supported_io_types": { 00:10:00.009 "read": true, 00:10:00.009 "write": true, 00:10:00.009 "unmap": false, 00:10:00.009 "flush": false, 00:10:00.009 "reset": true, 00:10:00.009 "nvme_admin": false, 00:10:00.009 "nvme_io": false, 00:10:00.009 "nvme_io_md": false, 00:10:00.009 "write_zeroes": true, 00:10:00.009 "zcopy": false, 00:10:00.009 "get_zone_info": false, 00:10:00.009 "zone_management": false, 00:10:00.009 "zone_append": false, 00:10:00.009 "compare": false, 00:10:00.009 "compare_and_write": false, 00:10:00.009 "abort": false, 00:10:00.009 "seek_hole": false, 00:10:00.009 "seek_data": false, 00:10:00.009 "copy": false, 00:10:00.009 "nvme_iov_md": false 00:10:00.009 }, 00:10:00.009 "memory_domains": [ 00:10:00.009 { 00:10:00.009 "dma_device_id": "system", 00:10:00.009 "dma_device_type": 1 00:10:00.009 }, 00:10:00.009 { 00:10:00.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.009 "dma_device_type": 2 00:10:00.009 }, 00:10:00.009 { 00:10:00.009 "dma_device_id": "system", 00:10:00.009 "dma_device_type": 1 00:10:00.009 }, 00:10:00.009 { 00:10:00.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.009 "dma_device_type": 2 00:10:00.009 }, 00:10:00.009 { 00:10:00.009 "dma_device_id": "system", 00:10:00.009 "dma_device_type": 1 00:10:00.009 }, 00:10:00.009 { 00:10:00.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.009 "dma_device_type": 2 00:10:00.009 } 00:10:00.009 ], 00:10:00.009 "driver_specific": { 00:10:00.009 "raid": { 00:10:00.009 
"uuid": "061e4a8e-cfdd-4680-88e7-29f93159ada5", 00:10:00.009 "strip_size_kb": 0, 00:10:00.009 "state": "online", 00:10:00.009 "raid_level": "raid1", 00:10:00.009 "superblock": true, 00:10:00.009 "num_base_bdevs": 3, 00:10:00.009 "num_base_bdevs_discovered": 3, 00:10:00.009 "num_base_bdevs_operational": 3, 00:10:00.009 "base_bdevs_list": [ 00:10:00.009 { 00:10:00.009 "name": "NewBaseBdev", 00:10:00.009 "uuid": "cf76cd6f-01b0-476a-9537-a9990cb2db84", 00:10:00.009 "is_configured": true, 00:10:00.009 "data_offset": 2048, 00:10:00.009 "data_size": 63488 00:10:00.009 }, 00:10:00.009 { 00:10:00.009 "name": "BaseBdev2", 00:10:00.009 "uuid": "ec57a3ea-0c60-4c77-ad3c-e8526bbdb7e1", 00:10:00.009 "is_configured": true, 00:10:00.009 "data_offset": 2048, 00:10:00.009 "data_size": 63488 00:10:00.010 }, 00:10:00.010 { 00:10:00.010 "name": "BaseBdev3", 00:10:00.010 "uuid": "2d1bfcc6-1bf2-48f4-85e3-c68f64b44762", 00:10:00.010 "is_configured": true, 00:10:00.010 "data_offset": 2048, 00:10:00.010 "data_size": 63488 00:10:00.010 } 00:10:00.010 ] 00:10:00.010 } 00:10:00.010 } 00:10:00.010 }' 00:10:00.010 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:00.270 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:00.270 BaseBdev2 00:10:00.270 BaseBdev3' 00:10:00.270 10:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:00.270 10:55:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.270 10:55:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.270 [2024-11-15 10:55:07.147782] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:00.270 [2024-11-15 10:55:07.147814] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.270 [2024-11-15 10:55:07.147918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.270 [2024-11-15 10:55:07.148296] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.270 [2024-11-15 10:55:07.148344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68170 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # 
'[' -z 68170 ']' 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 68170 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68170 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:00.270 killing process with pid 68170 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68170' 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 68170 00:10:00.270 [2024-11-15 10:55:07.194015] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:00.270 10:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 68170 00:10:00.838 [2024-11-15 10:55:07.499014] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.776 10:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:01.776 00:10:01.776 real 0m10.643s 00:10:01.776 user 0m16.948s 00:10:01.776 sys 0m1.866s 00:10:01.776 10:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:01.776 ************************************ 00:10:01.776 END TEST raid_state_function_test_sb 00:10:01.776 ************************************ 00:10:01.776 10:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.776 10:55:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:10:01.776 10:55:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:01.776 10:55:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:01.776 10:55:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.776 ************************************ 00:10:01.776 START TEST raid_superblock_test 00:10:01.776 ************************************ 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68790 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68790 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 68790 ']' 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:01.776 10:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.035 [2024-11-15 10:55:08.769677] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:10:02.035 [2024-11-15 10:55:08.769884] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68790 ] 00:10:02.035 [2024-11-15 10:55:08.955944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.294 [2024-11-15 10:55:09.077400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.553 [2024-11-15 10:55:09.278989] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.553 [2024-11-15 10:55:09.279145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.813 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:02.813 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:02.813 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:02.813 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:02.813 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:02.813 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:02.814 
10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.814 malloc1 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.814 [2024-11-15 10:55:09.678483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:02.814 [2024-11-15 10:55:09.678603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.814 [2024-11-15 10:55:09.678648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:02.814 [2024-11-15 10:55:09.678679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.814 [2024-11-15 10:55:09.680906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.814 [2024-11-15 10:55:09.680978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:02.814 pt1 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.814 malloc2 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.814 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.814 [2024-11-15 10:55:09.737450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:02.814 [2024-11-15 10:55:09.737555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.814 [2024-11-15 10:55:09.737596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:02.814 [2024-11-15 10:55:09.737622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.074 [2024-11-15 10:55:09.739739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.074 [2024-11-15 10:55:09.739815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:03.074 
pt2 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.074 malloc3 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.074 [2024-11-15 10:55:09.808366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:03.074 [2024-11-15 10:55:09.808479] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.074 [2024-11-15 10:55:09.808521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:03.074 [2024-11-15 10:55:09.808551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.074 [2024-11-15 10:55:09.810672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.074 [2024-11-15 10:55:09.810771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:03.074 pt3 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.074 [2024-11-15 10:55:09.820428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:03.074 [2024-11-15 10:55:09.822327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:03.074 [2024-11-15 10:55:09.822453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:03.074 [2024-11-15 10:55:09.822647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:03.074 [2024-11-15 10:55:09.822696] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:03.074 [2024-11-15 10:55:09.822988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:03.074 
[2024-11-15 10:55:09.823209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:03.074 [2024-11-15 10:55:09.823256] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:03.074 [2024-11-15 10:55:09.823484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.074 "name": "raid_bdev1", 00:10:03.074 "uuid": "2041319e-d21f-4166-9036-6c248daff2c0", 00:10:03.074 "strip_size_kb": 0, 00:10:03.074 "state": "online", 00:10:03.074 "raid_level": "raid1", 00:10:03.074 "superblock": true, 00:10:03.074 "num_base_bdevs": 3, 00:10:03.074 "num_base_bdevs_discovered": 3, 00:10:03.074 "num_base_bdevs_operational": 3, 00:10:03.074 "base_bdevs_list": [ 00:10:03.074 { 00:10:03.074 "name": "pt1", 00:10:03.074 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:03.074 "is_configured": true, 00:10:03.074 "data_offset": 2048, 00:10:03.074 "data_size": 63488 00:10:03.074 }, 00:10:03.074 { 00:10:03.074 "name": "pt2", 00:10:03.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.074 "is_configured": true, 00:10:03.074 "data_offset": 2048, 00:10:03.074 "data_size": 63488 00:10:03.074 }, 00:10:03.074 { 00:10:03.074 "name": "pt3", 00:10:03.074 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:03.074 "is_configured": true, 00:10:03.074 "data_offset": 2048, 00:10:03.074 "data_size": 63488 00:10:03.074 } 00:10:03.074 ] 00:10:03.074 }' 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.074 10:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.334 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:03.334 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:03.334 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:03.334 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:03.334 10:55:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.334 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.334 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.334 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:03.334 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.334 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.334 [2024-11-15 10:55:10.232209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.334 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.334 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.334 "name": "raid_bdev1", 00:10:03.334 "aliases": [ 00:10:03.334 "2041319e-d21f-4166-9036-6c248daff2c0" 00:10:03.334 ], 00:10:03.334 "product_name": "Raid Volume", 00:10:03.334 "block_size": 512, 00:10:03.334 "num_blocks": 63488, 00:10:03.334 "uuid": "2041319e-d21f-4166-9036-6c248daff2c0", 00:10:03.334 "assigned_rate_limits": { 00:10:03.334 "rw_ios_per_sec": 0, 00:10:03.334 "rw_mbytes_per_sec": 0, 00:10:03.334 "r_mbytes_per_sec": 0, 00:10:03.334 "w_mbytes_per_sec": 0 00:10:03.334 }, 00:10:03.334 "claimed": false, 00:10:03.334 "zoned": false, 00:10:03.334 "supported_io_types": { 00:10:03.334 "read": true, 00:10:03.334 "write": true, 00:10:03.334 "unmap": false, 00:10:03.334 "flush": false, 00:10:03.334 "reset": true, 00:10:03.334 "nvme_admin": false, 00:10:03.334 "nvme_io": false, 00:10:03.334 "nvme_io_md": false, 00:10:03.334 "write_zeroes": true, 00:10:03.334 "zcopy": false, 00:10:03.334 "get_zone_info": false, 00:10:03.334 "zone_management": false, 00:10:03.334 "zone_append": false, 00:10:03.334 "compare": false, 00:10:03.334 
"compare_and_write": false, 00:10:03.334 "abort": false, 00:10:03.334 "seek_hole": false, 00:10:03.334 "seek_data": false, 00:10:03.334 "copy": false, 00:10:03.334 "nvme_iov_md": false 00:10:03.334 }, 00:10:03.334 "memory_domains": [ 00:10:03.334 { 00:10:03.334 "dma_device_id": "system", 00:10:03.334 "dma_device_type": 1 00:10:03.334 }, 00:10:03.334 { 00:10:03.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.334 "dma_device_type": 2 00:10:03.334 }, 00:10:03.334 { 00:10:03.334 "dma_device_id": "system", 00:10:03.334 "dma_device_type": 1 00:10:03.334 }, 00:10:03.334 { 00:10:03.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.334 "dma_device_type": 2 00:10:03.334 }, 00:10:03.334 { 00:10:03.334 "dma_device_id": "system", 00:10:03.334 "dma_device_type": 1 00:10:03.334 }, 00:10:03.334 { 00:10:03.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.334 "dma_device_type": 2 00:10:03.334 } 00:10:03.334 ], 00:10:03.334 "driver_specific": { 00:10:03.334 "raid": { 00:10:03.334 "uuid": "2041319e-d21f-4166-9036-6c248daff2c0", 00:10:03.334 "strip_size_kb": 0, 00:10:03.334 "state": "online", 00:10:03.334 "raid_level": "raid1", 00:10:03.334 "superblock": true, 00:10:03.334 "num_base_bdevs": 3, 00:10:03.334 "num_base_bdevs_discovered": 3, 00:10:03.334 "num_base_bdevs_operational": 3, 00:10:03.334 "base_bdevs_list": [ 00:10:03.334 { 00:10:03.334 "name": "pt1", 00:10:03.334 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:03.334 "is_configured": true, 00:10:03.334 "data_offset": 2048, 00:10:03.334 "data_size": 63488 00:10:03.334 }, 00:10:03.334 { 00:10:03.334 "name": "pt2", 00:10:03.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.334 "is_configured": true, 00:10:03.334 "data_offset": 2048, 00:10:03.334 "data_size": 63488 00:10:03.334 }, 00:10:03.334 { 00:10:03.334 "name": "pt3", 00:10:03.334 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:03.334 "is_configured": true, 00:10:03.334 "data_offset": 2048, 00:10:03.334 "data_size": 63488 00:10:03.334 } 
00:10:03.334 ] 00:10:03.334 } 00:10:03.334 } 00:10:03.334 }' 00:10:03.334 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:03.594 pt2 00:10:03.594 pt3' 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.594 [2024-11-15 10:55:10.491792] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.594 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2041319e-d21f-4166-9036-6c248daff2c0 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2041319e-d21f-4166-9036-6c248daff2c0 ']' 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.854 [2024-11-15 10:55:10.539418] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:03.854 [2024-11-15 10:55:10.539521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:03.854 [2024-11-15 10:55:10.539619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.854 [2024-11-15 10:55:10.539710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:03.854 [2024-11-15 10:55:10.539720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:03.854 10:55:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.854 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.854 [2024-11-15 10:55:10.687199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:03.854 [2024-11-15 10:55:10.689263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:03.854 [2024-11-15 10:55:10.689384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:10:03.854 [2024-11-15 10:55:10.689442] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:03.854 [2024-11-15 10:55:10.689515] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:03.854 [2024-11-15 10:55:10.689536] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:03.854 [2024-11-15 10:55:10.689554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:03.854 [2024-11-15 10:55:10.689564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:03.854 request: 00:10:03.854 { 00:10:03.854 "name": "raid_bdev1", 00:10:03.854 "raid_level": "raid1", 00:10:03.854 "base_bdevs": [ 00:10:03.854 "malloc1", 00:10:03.855 "malloc2", 00:10:03.855 "malloc3" 00:10:03.855 ], 00:10:03.855 "superblock": false, 00:10:03.855 "method": "bdev_raid_create", 00:10:03.855 "req_id": 1 00:10:03.855 } 00:10:03.855 Got JSON-RPC error response 00:10:03.855 response: 00:10:03.855 { 00:10:03.855 "code": -17, 00:10:03.855 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:03.855 } 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.855 10:55:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.855 [2024-11-15 10:55:10.755017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:03.855 [2024-11-15 10:55:10.755134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.855 [2024-11-15 10:55:10.755178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:03.855 [2024-11-15 10:55:10.755212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.855 [2024-11-15 10:55:10.757491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.855 [2024-11-15 10:55:10.757562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:03.855 [2024-11-15 10:55:10.757675] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:03.855 [2024-11-15 10:55:10.757752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:03.855 pt1 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.855 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.114 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.114 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.114 "name": "raid_bdev1", 00:10:04.114 "uuid": "2041319e-d21f-4166-9036-6c248daff2c0", 00:10:04.114 "strip_size_kb": 0, 00:10:04.114 "state": "configuring", 00:10:04.114 
"raid_level": "raid1", 00:10:04.114 "superblock": true, 00:10:04.114 "num_base_bdevs": 3, 00:10:04.114 "num_base_bdevs_discovered": 1, 00:10:04.114 "num_base_bdevs_operational": 3, 00:10:04.114 "base_bdevs_list": [ 00:10:04.114 { 00:10:04.114 "name": "pt1", 00:10:04.114 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.114 "is_configured": true, 00:10:04.114 "data_offset": 2048, 00:10:04.114 "data_size": 63488 00:10:04.114 }, 00:10:04.114 { 00:10:04.114 "name": null, 00:10:04.114 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.114 "is_configured": false, 00:10:04.114 "data_offset": 2048, 00:10:04.114 "data_size": 63488 00:10:04.114 }, 00:10:04.114 { 00:10:04.114 "name": null, 00:10:04.114 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:04.114 "is_configured": false, 00:10:04.114 "data_offset": 2048, 00:10:04.114 "data_size": 63488 00:10:04.114 } 00:10:04.114 ] 00:10:04.114 }' 00:10:04.114 10:55:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.114 10:55:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.373 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:04.373 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:04.373 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.373 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.373 [2024-11-15 10:55:11.234219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:04.373 [2024-11-15 10:55:11.234379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.373 [2024-11-15 10:55:11.234421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:04.373 [2024-11-15 10:55:11.234463] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.373 [2024-11-15 10:55:11.234935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.373 [2024-11-15 10:55:11.234990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:04.373 [2024-11-15 10:55:11.235088] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:04.374 [2024-11-15 10:55:11.235110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:04.374 pt2 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.374 [2024-11-15 10:55:11.246183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.374 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.632 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.632 "name": "raid_bdev1", 00:10:04.632 "uuid": "2041319e-d21f-4166-9036-6c248daff2c0", 00:10:04.632 "strip_size_kb": 0, 00:10:04.632 "state": "configuring", 00:10:04.632 "raid_level": "raid1", 00:10:04.632 "superblock": true, 00:10:04.633 "num_base_bdevs": 3, 00:10:04.633 "num_base_bdevs_discovered": 1, 00:10:04.633 "num_base_bdevs_operational": 3, 00:10:04.633 "base_bdevs_list": [ 00:10:04.633 { 00:10:04.633 "name": "pt1", 00:10:04.633 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.633 "is_configured": true, 00:10:04.633 "data_offset": 2048, 00:10:04.633 "data_size": 63488 00:10:04.633 }, 00:10:04.633 { 00:10:04.633 "name": null, 00:10:04.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.633 "is_configured": false, 00:10:04.633 "data_offset": 0, 00:10:04.633 "data_size": 63488 00:10:04.633 }, 00:10:04.633 { 00:10:04.633 "name": null, 00:10:04.633 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:04.633 "is_configured": false, 00:10:04.633 "data_offset": 2048, 00:10:04.633 
"data_size": 63488 00:10:04.633 } 00:10:04.633 ] 00:10:04.633 }' 00:10:04.633 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.633 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.891 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:04.891 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:04.891 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:04.891 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.891 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.891 [2024-11-15 10:55:11.733366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:04.891 [2024-11-15 10:55:11.733495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.891 [2024-11-15 10:55:11.733534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:04.891 [2024-11-15 10:55:11.733567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.891 [2024-11-15 10:55:11.734087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.891 [2024-11-15 10:55:11.734169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:04.891 [2024-11-15 10:55:11.734289] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:04.891 [2024-11-15 10:55:11.734388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:04.891 pt2 00:10:04.891 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.891 10:55:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:04.891 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:04.891 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:04.891 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.891 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.891 [2024-11-15 10:55:11.745338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:04.891 [2024-11-15 10:55:11.745441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.891 [2024-11-15 10:55:11.745501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:04.891 [2024-11-15 10:55:11.745548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.891 [2024-11-15 10:55:11.746044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.891 [2024-11-15 10:55:11.746116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:04.891 [2024-11-15 10:55:11.746231] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:04.891 [2024-11-15 10:55:11.746287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:04.891 [2024-11-15 10:55:11.746474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:04.891 [2024-11-15 10:55:11.746522] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:04.891 [2024-11-15 10:55:11.746805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:04.891 [2024-11-15 10:55:11.747023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:10:04.891 [2024-11-15 10:55:11.747068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:04.892 [2024-11-15 10:55:11.747255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.892 pt3 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.892 "name": "raid_bdev1", 00:10:04.892 "uuid": "2041319e-d21f-4166-9036-6c248daff2c0", 00:10:04.892 "strip_size_kb": 0, 00:10:04.892 "state": "online", 00:10:04.892 "raid_level": "raid1", 00:10:04.892 "superblock": true, 00:10:04.892 "num_base_bdevs": 3, 00:10:04.892 "num_base_bdevs_discovered": 3, 00:10:04.892 "num_base_bdevs_operational": 3, 00:10:04.892 "base_bdevs_list": [ 00:10:04.892 { 00:10:04.892 "name": "pt1", 00:10:04.892 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.892 "is_configured": true, 00:10:04.892 "data_offset": 2048, 00:10:04.892 "data_size": 63488 00:10:04.892 }, 00:10:04.892 { 00:10:04.892 "name": "pt2", 00:10:04.892 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.892 "is_configured": true, 00:10:04.892 "data_offset": 2048, 00:10:04.892 "data_size": 63488 00:10:04.892 }, 00:10:04.892 { 00:10:04.892 "name": "pt3", 00:10:04.892 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:04.892 "is_configured": true, 00:10:04.892 "data_offset": 2048, 00:10:04.892 "data_size": 63488 00:10:04.892 } 00:10:04.892 ] 00:10:04.892 }' 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.892 10:55:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.510 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:05.510 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:05.510 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:05.510 10:55:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:05.510 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:05.510 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:05.510 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:05.510 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:05.510 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.510 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.510 [2024-11-15 10:55:12.236848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.510 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.510 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:05.510 "name": "raid_bdev1", 00:10:05.510 "aliases": [ 00:10:05.510 "2041319e-d21f-4166-9036-6c248daff2c0" 00:10:05.510 ], 00:10:05.510 "product_name": "Raid Volume", 00:10:05.510 "block_size": 512, 00:10:05.510 "num_blocks": 63488, 00:10:05.510 "uuid": "2041319e-d21f-4166-9036-6c248daff2c0", 00:10:05.510 "assigned_rate_limits": { 00:10:05.510 "rw_ios_per_sec": 0, 00:10:05.510 "rw_mbytes_per_sec": 0, 00:10:05.510 "r_mbytes_per_sec": 0, 00:10:05.510 "w_mbytes_per_sec": 0 00:10:05.510 }, 00:10:05.510 "claimed": false, 00:10:05.510 "zoned": false, 00:10:05.510 "supported_io_types": { 00:10:05.510 "read": true, 00:10:05.510 "write": true, 00:10:05.510 "unmap": false, 00:10:05.510 "flush": false, 00:10:05.510 "reset": true, 00:10:05.510 "nvme_admin": false, 00:10:05.510 "nvme_io": false, 00:10:05.510 "nvme_io_md": false, 00:10:05.510 "write_zeroes": true, 00:10:05.510 "zcopy": false, 00:10:05.510 "get_zone_info": false, 00:10:05.510 
"zone_management": false, 00:10:05.510 "zone_append": false, 00:10:05.510 "compare": false, 00:10:05.510 "compare_and_write": false, 00:10:05.510 "abort": false, 00:10:05.510 "seek_hole": false, 00:10:05.510 "seek_data": false, 00:10:05.510 "copy": false, 00:10:05.510 "nvme_iov_md": false 00:10:05.510 }, 00:10:05.510 "memory_domains": [ 00:10:05.510 { 00:10:05.510 "dma_device_id": "system", 00:10:05.510 "dma_device_type": 1 00:10:05.510 }, 00:10:05.510 { 00:10:05.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.510 "dma_device_type": 2 00:10:05.510 }, 00:10:05.510 { 00:10:05.510 "dma_device_id": "system", 00:10:05.510 "dma_device_type": 1 00:10:05.510 }, 00:10:05.510 { 00:10:05.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.510 "dma_device_type": 2 00:10:05.510 }, 00:10:05.510 { 00:10:05.510 "dma_device_id": "system", 00:10:05.510 "dma_device_type": 1 00:10:05.510 }, 00:10:05.510 { 00:10:05.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.510 "dma_device_type": 2 00:10:05.510 } 00:10:05.510 ], 00:10:05.510 "driver_specific": { 00:10:05.510 "raid": { 00:10:05.510 "uuid": "2041319e-d21f-4166-9036-6c248daff2c0", 00:10:05.510 "strip_size_kb": 0, 00:10:05.510 "state": "online", 00:10:05.510 "raid_level": "raid1", 00:10:05.510 "superblock": true, 00:10:05.510 "num_base_bdevs": 3, 00:10:05.510 "num_base_bdevs_discovered": 3, 00:10:05.510 "num_base_bdevs_operational": 3, 00:10:05.510 "base_bdevs_list": [ 00:10:05.510 { 00:10:05.510 "name": "pt1", 00:10:05.510 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:05.510 "is_configured": true, 00:10:05.510 "data_offset": 2048, 00:10:05.510 "data_size": 63488 00:10:05.510 }, 00:10:05.510 { 00:10:05.510 "name": "pt2", 00:10:05.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.510 "is_configured": true, 00:10:05.510 "data_offset": 2048, 00:10:05.510 "data_size": 63488 00:10:05.510 }, 00:10:05.510 { 00:10:05.510 "name": "pt3", 00:10:05.510 "uuid": "00000000-0000-0000-0000-000000000003", 
00:10:05.510 "is_configured": true, 00:10:05.510 "data_offset": 2048, 00:10:05.510 "data_size": 63488 00:10:05.510 } 00:10:05.510 ] 00:10:05.510 } 00:10:05.510 } 00:10:05.510 }' 00:10:05.510 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.510 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:05.510 pt2 00:10:05.510 pt3' 00:10:05.510 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.510 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.511 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.511 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:05.511 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.511 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.511 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.511 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.511 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.511 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.511 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.511 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:05.511 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:05.511 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.511 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.511 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.771 [2024-11-15 10:55:12.516386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2041319e-d21f-4166-9036-6c248daff2c0 '!=' 2041319e-d21f-4166-9036-6c248daff2c0 ']' 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.771 [2024-11-15 10:55:12.564040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.771 10:55:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.771 "name": "raid_bdev1", 00:10:05.771 "uuid": "2041319e-d21f-4166-9036-6c248daff2c0", 00:10:05.771 "strip_size_kb": 0, 00:10:05.771 "state": "online", 00:10:05.771 "raid_level": "raid1", 00:10:05.771 "superblock": true, 00:10:05.771 "num_base_bdevs": 3, 00:10:05.771 "num_base_bdevs_discovered": 2, 00:10:05.771 "num_base_bdevs_operational": 2, 00:10:05.771 "base_bdevs_list": [ 00:10:05.771 { 00:10:05.771 "name": null, 00:10:05.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.771 "is_configured": false, 00:10:05.771 "data_offset": 0, 00:10:05.771 "data_size": 63488 00:10:05.771 }, 00:10:05.771 { 00:10:05.771 "name": "pt2", 00:10:05.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.771 "is_configured": true, 00:10:05.771 "data_offset": 2048, 00:10:05.771 "data_size": 63488 00:10:05.771 }, 00:10:05.771 { 00:10:05.771 "name": "pt3", 00:10:05.771 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:05.771 "is_configured": true, 00:10:05.771 "data_offset": 2048, 00:10:05.771 "data_size": 63488 00:10:05.771 } 00:10:05.772 ] 00:10:05.772 }' 00:10:05.772 10:55:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.772 10:55:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.342 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:06.342 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.342 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.342 [2024-11-15 10:55:13.027162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:06.342 [2024-11-15 10:55:13.027240] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.343 [2024-11-15 10:55:13.027353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.343 [2024-11-15 10:55:13.027447] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.343 [2024-11-15 10:55:13.027535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:06.343 
10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.343 [2024-11-15 10:55:13.114987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:06.343 [2024-11-15 10:55:13.115117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.343 [2024-11-15 10:55:13.115152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:06.343 [2024-11-15 10:55:13.115184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.343 [2024-11-15 10:55:13.117504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.343 [2024-11-15 10:55:13.117620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:06.343 [2024-11-15 10:55:13.117743] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:06.343 [2024-11-15 10:55:13.117822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:06.343 pt2 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.343 "name": "raid_bdev1", 00:10:06.343 "uuid": "2041319e-d21f-4166-9036-6c248daff2c0", 00:10:06.343 "strip_size_kb": 0, 00:10:06.343 "state": "configuring", 00:10:06.343 "raid_level": "raid1", 00:10:06.343 "superblock": true, 00:10:06.343 "num_base_bdevs": 3, 00:10:06.343 "num_base_bdevs_discovered": 1, 00:10:06.343 "num_base_bdevs_operational": 2, 00:10:06.343 "base_bdevs_list": [ 00:10:06.343 { 00:10:06.343 "name": null, 00:10:06.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.343 "is_configured": false, 00:10:06.343 "data_offset": 2048, 00:10:06.343 "data_size": 63488 00:10:06.343 }, 00:10:06.343 { 00:10:06.343 "name": "pt2", 00:10:06.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.343 "is_configured": true, 00:10:06.343 "data_offset": 2048, 00:10:06.343 "data_size": 63488 00:10:06.343 }, 00:10:06.343 { 00:10:06.343 "name": null, 00:10:06.343 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.343 "is_configured": false, 00:10:06.343 "data_offset": 2048, 00:10:06.343 "data_size": 63488 00:10:06.343 } 00:10:06.343 ] 00:10:06.343 }' 
00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.343 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.913 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:06.913 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:06.913 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:06.913 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:06.913 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.913 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.913 [2024-11-15 10:55:13.582214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:06.913 [2024-11-15 10:55:13.582351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.913 [2024-11-15 10:55:13.582425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:06.913 [2024-11-15 10:55:13.582468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.913 [2024-11-15 10:55:13.583003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.913 [2024-11-15 10:55:13.583073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:06.913 [2024-11-15 10:55:13.583215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:06.913 [2024-11-15 10:55:13.583280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:06.913 [2024-11-15 10:55:13.583478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:06.913 [2024-11-15 10:55:13.583527] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:06.913 [2024-11-15 10:55:13.583841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:06.913 [2024-11-15 10:55:13.584070] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:06.913 [2024-11-15 10:55:13.584116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:06.913 [2024-11-15 10:55:13.584338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.913 pt3 00:10:06.913 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.913 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:06.913 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.913 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.913 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.913 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.914 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:06.914 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.914 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.914 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.914 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.914 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.914 10:55:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.914 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.914 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.914 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.914 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.914 "name": "raid_bdev1", 00:10:06.914 "uuid": "2041319e-d21f-4166-9036-6c248daff2c0", 00:10:06.914 "strip_size_kb": 0, 00:10:06.914 "state": "online", 00:10:06.914 "raid_level": "raid1", 00:10:06.914 "superblock": true, 00:10:06.914 "num_base_bdevs": 3, 00:10:06.914 "num_base_bdevs_discovered": 2, 00:10:06.914 "num_base_bdevs_operational": 2, 00:10:06.914 "base_bdevs_list": [ 00:10:06.914 { 00:10:06.914 "name": null, 00:10:06.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.914 "is_configured": false, 00:10:06.914 "data_offset": 2048, 00:10:06.914 "data_size": 63488 00:10:06.914 }, 00:10:06.914 { 00:10:06.914 "name": "pt2", 00:10:06.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.914 "is_configured": true, 00:10:06.914 "data_offset": 2048, 00:10:06.914 "data_size": 63488 00:10:06.914 }, 00:10:06.914 { 00:10:06.914 "name": "pt3", 00:10:06.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.914 "is_configured": true, 00:10:06.914 "data_offset": 2048, 00:10:06.914 "data_size": 63488 00:10:06.914 } 00:10:06.914 ] 00:10:06.914 }' 00:10:06.914 10:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.914 10:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.173 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:07.173 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.173 
10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.173 [2024-11-15 10:55:14.033430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.173 [2024-11-15 10:55:14.033463] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.173 [2024-11-15 10:55:14.033553] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.173 [2024-11-15 10:55:14.033620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.173 [2024-11-15 10:55:14.033631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:07.173 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.173 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.173 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.173 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.173 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:07.173 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.173 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:07.173 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:07.173 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:07.173 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:07.173 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:07.173 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.173 10:55:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.433 [2024-11-15 10:55:14.109364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:07.433 [2024-11-15 10:55:14.109488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.433 [2024-11-15 10:55:14.109532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:07.433 [2024-11-15 10:55:14.109567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.433 [2024-11-15 10:55:14.111924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.433 [2024-11-15 10:55:14.112005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:07.433 [2024-11-15 10:55:14.112128] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:07.433 [2024-11-15 10:55:14.112195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:07.433 [2024-11-15 10:55:14.112390] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:07.433 [2024-11-15 10:55:14.112447] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.433 [2024-11-15 10:55:14.112495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:07.433 [2024-11-15 
10:55:14.112609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:07.433 pt1 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.433 10:55:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.433 "name": "raid_bdev1", 00:10:07.433 "uuid": "2041319e-d21f-4166-9036-6c248daff2c0", 00:10:07.433 "strip_size_kb": 0, 00:10:07.433 "state": "configuring", 00:10:07.433 "raid_level": "raid1", 00:10:07.433 "superblock": true, 00:10:07.433 "num_base_bdevs": 3, 00:10:07.433 "num_base_bdevs_discovered": 1, 00:10:07.433 "num_base_bdevs_operational": 2, 00:10:07.433 "base_bdevs_list": [ 00:10:07.433 { 00:10:07.433 "name": null, 00:10:07.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.433 "is_configured": false, 00:10:07.433 "data_offset": 2048, 00:10:07.433 "data_size": 63488 00:10:07.433 }, 00:10:07.433 { 00:10:07.433 "name": "pt2", 00:10:07.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:07.433 "is_configured": true, 00:10:07.433 "data_offset": 2048, 00:10:07.433 "data_size": 63488 00:10:07.433 }, 00:10:07.433 { 00:10:07.433 "name": null, 00:10:07.433 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:07.433 "is_configured": false, 00:10:07.433 "data_offset": 2048, 00:10:07.433 "data_size": 63488 00:10:07.433 } 00:10:07.433 ] 00:10:07.433 }' 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.433 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.999 [2024-11-15 10:55:14.652452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:07.999 [2024-11-15 10:55:14.652568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.999 [2024-11-15 10:55:14.652615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:07.999 [2024-11-15 10:55:14.652645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.999 [2024-11-15 10:55:14.653144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.999 [2024-11-15 10:55:14.653198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:07.999 [2024-11-15 10:55:14.653330] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:07.999 [2024-11-15 10:55:14.653406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:07.999 [2024-11-15 10:55:14.653567] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:07.999 [2024-11-15 10:55:14.653603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:07.999 [2024-11-15 10:55:14.653874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:07.999 [2024-11-15 10:55:14.654076] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:07.999 [2024-11-15 10:55:14.654121] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:10:07.999 [2024-11-15 10:55:14.654309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.999 pt3 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.999 "name": "raid_bdev1", 00:10:07.999 "uuid": "2041319e-d21f-4166-9036-6c248daff2c0", 00:10:07.999 "strip_size_kb": 0, 00:10:07.999 "state": "online", 00:10:07.999 "raid_level": "raid1", 00:10:07.999 "superblock": true, 00:10:07.999 "num_base_bdevs": 3, 00:10:07.999 "num_base_bdevs_discovered": 2, 00:10:07.999 "num_base_bdevs_operational": 2, 00:10:07.999 "base_bdevs_list": [ 00:10:07.999 { 00:10:07.999 "name": null, 00:10:07.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.999 "is_configured": false, 00:10:07.999 "data_offset": 2048, 00:10:07.999 "data_size": 63488 00:10:07.999 }, 00:10:07.999 { 00:10:07.999 "name": "pt2", 00:10:07.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:07.999 "is_configured": true, 00:10:07.999 "data_offset": 2048, 00:10:07.999 "data_size": 63488 00:10:07.999 }, 00:10:07.999 { 00:10:07.999 "name": "pt3", 00:10:07.999 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:07.999 "is_configured": true, 00:10:07.999 "data_offset": 2048, 00:10:07.999 "data_size": 63488 00:10:07.999 } 00:10:07.999 ] 00:10:07.999 }' 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.999 10:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.257 10:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:08.257 10:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:08.257 10:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.257 10:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.257 10:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.257 10:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:08.257 
10:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:08.257 10:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.257 10:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.257 10:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:08.257 [2024-11-15 10:55:15.147901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.257 10:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.515 10:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2041319e-d21f-4166-9036-6c248daff2c0 '!=' 2041319e-d21f-4166-9036-6c248daff2c0 ']' 00:10:08.515 10:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68790 00:10:08.515 10:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 68790 ']' 00:10:08.515 10:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 68790 00:10:08.515 10:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:10:08.515 10:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:08.515 10:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68790 00:10:08.515 killing process with pid 68790 00:10:08.515 10:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:08.515 10:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:08.515 10:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68790' 00:10:08.515 10:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 68790 00:10:08.515 [2024-11-15 
10:55:15.230937] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:08.515 [2024-11-15 10:55:15.231041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.515 [2024-11-15 10:55:15.231106] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.515 [2024-11-15 10:55:15.231119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:08.515 10:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 68790 00:10:08.781 [2024-11-15 10:55:15.545914] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.166 ************************************ 00:10:10.166 END TEST raid_superblock_test 00:10:10.166 ************************************ 00:10:10.166 10:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:10.166 00:10:10.166 real 0m7.994s 00:10:10.166 user 0m12.555s 00:10:10.166 sys 0m1.415s 00:10:10.166 10:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:10.166 10:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.166 10:55:16 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:10.166 10:55:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:10.166 10:55:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:10.166 10:55:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:10.166 ************************************ 00:10:10.166 START TEST raid_read_error_test 00:10:10.166 ************************************ 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:10.166 10:55:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WBJduklrhV 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69236 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69236 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 69236 ']' 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:10.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:10.166 10:55:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.166 [2024-11-15 10:55:16.841243] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:10:10.166 [2024-11-15 10:55:16.841481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69236 ] 00:10:10.166 [2024-11-15 10:55:16.996371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.425 [2024-11-15 10:55:17.111855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.425 [2024-11-15 10:55:17.321593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.425 [2024-11-15 10:55:17.321642] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.993 BaseBdev1_malloc 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.993 true 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.993 [2024-11-15 10:55:17.757134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:10.993 [2024-11-15 10:55:17.757268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.993 [2024-11-15 10:55:17.757325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:10.993 [2024-11-15 10:55:17.757362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.993 [2024-11-15 10:55:17.759716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.993 [2024-11-15 10:55:17.759814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:10.993 BaseBdev1 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.993 BaseBdev2_malloc 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.993 true 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.993 [2024-11-15 10:55:17.827054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:10.993 [2024-11-15 10:55:17.827163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.993 [2024-11-15 10:55:17.827217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:10.993 [2024-11-15 10:55:17.827251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.993 [2024-11-15 10:55:17.829682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.993 [2024-11-15 10:55:17.829779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:10.993 BaseBdev2 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.993 BaseBdev3_malloc 00:10:10.993 10:55:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.993 true 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.993 [2024-11-15 10:55:17.908462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:10.993 [2024-11-15 10:55:17.908573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.993 [2024-11-15 10:55:17.908614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:10.993 [2024-11-15 10:55:17.908648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.993 [2024-11-15 10:55:17.910968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.993 [2024-11-15 10:55:17.911061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:10.993 BaseBdev3 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.993 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.252 [2024-11-15 10:55:17.920514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.252 [2024-11-15 10:55:17.922545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.252 [2024-11-15 10:55:17.922660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.252 [2024-11-15 10:55:17.922934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:11.252 [2024-11-15 10:55:17.922987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:11.252 [2024-11-15 10:55:17.923314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:11.252 [2024-11-15 10:55:17.923549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:11.252 [2024-11-15 10:55:17.923601] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:11.252 [2024-11-15 10:55:17.923819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.252 10:55:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.252 "name": "raid_bdev1", 00:10:11.252 "uuid": "d57e9d3f-0fc8-44f0-86c4-b4a8ef6efee9", 00:10:11.252 "strip_size_kb": 0, 00:10:11.252 "state": "online", 00:10:11.252 "raid_level": "raid1", 00:10:11.252 "superblock": true, 00:10:11.252 "num_base_bdevs": 3, 00:10:11.252 "num_base_bdevs_discovered": 3, 00:10:11.252 "num_base_bdevs_operational": 3, 00:10:11.252 "base_bdevs_list": [ 00:10:11.252 { 00:10:11.252 "name": "BaseBdev1", 00:10:11.252 "uuid": "7f545983-30f1-5a8a-9720-94123b240bbe", 00:10:11.252 "is_configured": true, 00:10:11.252 "data_offset": 2048, 00:10:11.252 "data_size": 63488 00:10:11.252 }, 00:10:11.252 { 00:10:11.252 "name": "BaseBdev2", 00:10:11.252 "uuid": "0e148b6f-ffe9-512f-9a23-2a0f5e546870", 00:10:11.252 "is_configured": true, 00:10:11.252 "data_offset": 2048, 00:10:11.252 "data_size": 63488 
00:10:11.252 }, 00:10:11.252 { 00:10:11.252 "name": "BaseBdev3", 00:10:11.252 "uuid": "e508b72d-47b5-5a2d-9c29-2c66a3a075c0", 00:10:11.252 "is_configured": true, 00:10:11.252 "data_offset": 2048, 00:10:11.252 "data_size": 63488 00:10:11.252 } 00:10:11.252 ] 00:10:11.252 }' 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.252 10:55:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.511 10:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:11.511 10:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:11.770 [2024-11-15 10:55:18.477099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.704 
10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.704 "name": "raid_bdev1", 00:10:12.704 "uuid": "d57e9d3f-0fc8-44f0-86c4-b4a8ef6efee9", 00:10:12.704 "strip_size_kb": 0, 00:10:12.704 "state": "online", 00:10:12.704 "raid_level": "raid1", 00:10:12.704 "superblock": true, 00:10:12.704 "num_base_bdevs": 3, 00:10:12.704 "num_base_bdevs_discovered": 3, 00:10:12.704 "num_base_bdevs_operational": 3, 00:10:12.704 "base_bdevs_list": [ 00:10:12.704 { 00:10:12.704 "name": "BaseBdev1", 00:10:12.704 "uuid": "7f545983-30f1-5a8a-9720-94123b240bbe", 
00:10:12.704 "is_configured": true, 00:10:12.704 "data_offset": 2048, 00:10:12.704 "data_size": 63488 00:10:12.704 }, 00:10:12.704 { 00:10:12.704 "name": "BaseBdev2", 00:10:12.704 "uuid": "0e148b6f-ffe9-512f-9a23-2a0f5e546870", 00:10:12.704 "is_configured": true, 00:10:12.704 "data_offset": 2048, 00:10:12.704 "data_size": 63488 00:10:12.704 }, 00:10:12.704 { 00:10:12.704 "name": "BaseBdev3", 00:10:12.704 "uuid": "e508b72d-47b5-5a2d-9c29-2c66a3a075c0", 00:10:12.704 "is_configured": true, 00:10:12.704 "data_offset": 2048, 00:10:12.704 "data_size": 63488 00:10:12.704 } 00:10:12.704 ] 00:10:12.704 }' 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.704 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.963 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:12.963 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.963 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.963 [2024-11-15 10:55:19.825109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:12.963 [2024-11-15 10:55:19.825229] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.963 [2024-11-15 10:55:19.828245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.963 [2024-11-15 10:55:19.828354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.963 [2024-11-15 10:55:19.828498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.963 [2024-11-15 10:55:19.828549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:12.963 { 00:10:12.963 "results": [ 00:10:12.963 { 00:10:12.963 "job": "raid_bdev1", 
00:10:12.963 "core_mask": "0x1", 00:10:12.963 "workload": "randrw", 00:10:12.963 "percentage": 50, 00:10:12.963 "status": "finished", 00:10:12.963 "queue_depth": 1, 00:10:12.963 "io_size": 131072, 00:10:12.963 "runtime": 1.348762, 00:10:12.963 "iops": 12278.66740017883, 00:10:12.963 "mibps": 1534.8334250223538, 00:10:12.963 "io_failed": 0, 00:10:12.963 "io_timeout": 0, 00:10:12.963 "avg_latency_us": 78.44011107276025, 00:10:12.963 "min_latency_us": 24.929257641921396, 00:10:12.963 "max_latency_us": 1709.9458515283843 00:10:12.963 } 00:10:12.963 ], 00:10:12.963 "core_count": 1 00:10:12.963 } 00:10:12.963 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.963 10:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69236 00:10:12.963 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 69236 ']' 00:10:12.963 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 69236 00:10:12.963 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:10:12.963 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:12.963 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69236 00:10:12.963 killing process with pid 69236 00:10:12.963 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:12.963 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:12.963 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69236' 00:10:12.963 10:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 69236 00:10:12.963 [2024-11-15 10:55:19.870012] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:12.963 10:55:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 69236 00:10:13.221 [2024-11-15 10:55:20.134237] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.597 10:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:14.597 10:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WBJduklrhV 00:10:14.597 10:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:14.597 10:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:14.597 10:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:14.597 10:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:14.597 10:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:14.597 10:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:14.597 00:10:14.597 real 0m4.642s 00:10:14.597 user 0m5.528s 00:10:14.597 sys 0m0.540s 00:10:14.597 10:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:14.597 ************************************ 00:10:14.597 END TEST raid_read_error_test 00:10:14.597 ************************************ 00:10:14.597 10:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.597 10:55:21 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:14.597 10:55:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:14.597 10:55:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:14.597 10:55:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:14.597 ************************************ 00:10:14.597 START TEST raid_write_error_test 00:10:14.597 ************************************ 00:10:14.597 10:55:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MbWRDmBtxn 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69382 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69382 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69382 ']' 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:14.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:14.597 10:55:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.856 [2024-11-15 10:55:21.547630] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:10:14.856 [2024-11-15 10:55:21.547845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69382 ] 00:10:14.856 [2024-11-15 10:55:21.723580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.114 [2024-11-15 10:55:21.857102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.372 [2024-11-15 10:55:22.067420] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.372 [2024-11-15 10:55:22.067472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.631 BaseBdev1_malloc 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.631 true 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.631 [2024-11-15 10:55:22.458893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:15.631 [2024-11-15 10:55:22.458998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.631 [2024-11-15 10:55:22.459037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:15.631 [2024-11-15 10:55:22.459068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.631 [2024-11-15 10:55:22.461423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.631 [2024-11-15 10:55:22.461498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:15.631 BaseBdev1 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.631 BaseBdev2_malloc 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.631 true 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.631 [2024-11-15 10:55:22.526350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:15.631 [2024-11-15 10:55:22.526484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.631 [2024-11-15 10:55:22.526539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:15.631 [2024-11-15 10:55:22.526594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.631 [2024-11-15 10:55:22.529487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.631 [2024-11-15 10:55:22.529586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:15.631 BaseBdev2 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.631 10:55:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.631 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.890 BaseBdev3_malloc 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.890 true 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.890 [2024-11-15 10:55:22.604701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:15.890 [2024-11-15 10:55:22.604757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.890 [2024-11-15 10:55:22.604775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:15.890 [2024-11-15 10:55:22.604787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.890 [2024-11-15 10:55:22.606890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.890 [2024-11-15 10:55:22.606970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:15.890 BaseBdev3 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.890 [2024-11-15 10:55:22.616728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.890 [2024-11-15 10:55:22.618556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.890 [2024-11-15 10:55:22.618631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.890 [2024-11-15 10:55:22.618838] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:15.890 [2024-11-15 10:55:22.618851] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:15.890 [2024-11-15 10:55:22.619087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:15.890 [2024-11-15 10:55:22.619260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:15.890 [2024-11-15 10:55:22.619272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:15.890 [2024-11-15 10:55:22.619436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.890 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.891 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.891 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.891 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.891 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.891 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.891 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.891 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.891 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.891 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.891 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.891 "name": "raid_bdev1", 00:10:15.891 "uuid": "fb5461fc-a15a-41a9-9bd2-c9d97a4bcae0", 00:10:15.891 "strip_size_kb": 0, 00:10:15.891 "state": "online", 00:10:15.891 "raid_level": "raid1", 00:10:15.891 "superblock": true, 00:10:15.891 "num_base_bdevs": 3, 00:10:15.891 "num_base_bdevs_discovered": 3, 00:10:15.891 "num_base_bdevs_operational": 3, 00:10:15.891 "base_bdevs_list": [ 00:10:15.891 { 00:10:15.891 "name": "BaseBdev1", 00:10:15.891 
"uuid": "0e80de1a-64f9-5018-9bfa-39f9ceb4485f", 00:10:15.891 "is_configured": true, 00:10:15.891 "data_offset": 2048, 00:10:15.891 "data_size": 63488 00:10:15.891 }, 00:10:15.891 { 00:10:15.891 "name": "BaseBdev2", 00:10:15.891 "uuid": "6db110e1-ca96-5e88-b83b-9546a0146e68", 00:10:15.891 "is_configured": true, 00:10:15.891 "data_offset": 2048, 00:10:15.891 "data_size": 63488 00:10:15.891 }, 00:10:15.891 { 00:10:15.891 "name": "BaseBdev3", 00:10:15.891 "uuid": "ad7a7226-4d58-5b9a-a8ff-0b4ce325e43a", 00:10:15.891 "is_configured": true, 00:10:15.891 "data_offset": 2048, 00:10:15.891 "data_size": 63488 00:10:15.891 } 00:10:15.891 ] 00:10:15.891 }' 00:10:15.891 10:55:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.891 10:55:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.149 10:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:16.149 10:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:16.407 [2024-11-15 10:55:23.165541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.340 [2024-11-15 10:55:24.077331] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:17.340 [2024-11-15 10:55:24.077471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.340 [2024-11-15 10:55:24.077725] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.340 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.340 "name": "raid_bdev1", 00:10:17.340 "uuid": "fb5461fc-a15a-41a9-9bd2-c9d97a4bcae0", 00:10:17.340 "strip_size_kb": 0, 00:10:17.340 "state": "online", 00:10:17.340 "raid_level": "raid1", 00:10:17.340 "superblock": true, 00:10:17.340 "num_base_bdevs": 3, 00:10:17.340 "num_base_bdevs_discovered": 2, 00:10:17.340 "num_base_bdevs_operational": 2, 00:10:17.340 "base_bdevs_list": [ 00:10:17.340 { 00:10:17.340 "name": null, 00:10:17.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.340 "is_configured": false, 00:10:17.340 "data_offset": 0, 00:10:17.340 "data_size": 63488 00:10:17.340 }, 00:10:17.340 { 00:10:17.340 "name": "BaseBdev2", 00:10:17.340 "uuid": "6db110e1-ca96-5e88-b83b-9546a0146e68", 00:10:17.340 "is_configured": true, 00:10:17.340 "data_offset": 2048, 00:10:17.341 "data_size": 63488 00:10:17.341 }, 00:10:17.341 { 00:10:17.341 "name": "BaseBdev3", 00:10:17.341 "uuid": "ad7a7226-4d58-5b9a-a8ff-0b4ce325e43a", 00:10:17.341 "is_configured": true, 00:10:17.341 "data_offset": 2048, 00:10:17.341 "data_size": 63488 00:10:17.341 } 00:10:17.341 ] 00:10:17.341 }' 00:10:17.341 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.341 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.599 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:17.599 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.599 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.599 [2024-11-15 10:55:24.523719] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.599 [2024-11-15 10:55:24.523829] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.857 [2024-11-15 10:55:24.526921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.857 [2024-11-15 10:55:24.527040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.857 [2024-11-15 10:55:24.527146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.857 [2024-11-15 10:55:24.527200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:17.857 { 00:10:17.857 "results": [ 00:10:17.857 { 00:10:17.857 "job": "raid_bdev1", 00:10:17.857 "core_mask": "0x1", 00:10:17.857 "workload": "randrw", 00:10:17.857 "percentage": 50, 00:10:17.857 "status": "finished", 00:10:17.857 "queue_depth": 1, 00:10:17.857 "io_size": 131072, 00:10:17.857 "runtime": 1.358498, 00:10:17.857 "iops": 13974.256863094388, 00:10:17.857 "mibps": 1746.7821078867985, 00:10:17.857 "io_failed": 0, 00:10:17.857 "io_timeout": 0, 00:10:17.857 "avg_latency_us": 68.63798813802292, 00:10:17.857 "min_latency_us": 24.593886462882097, 00:10:17.857 "max_latency_us": 1659.8637554585152 00:10:17.857 } 00:10:17.857 ], 00:10:17.857 "core_count": 1 00:10:17.857 } 00:10:17.857 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.857 10:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69382 00:10:17.857 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69382 ']' 00:10:17.857 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69382 00:10:17.857 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:10:17.857 10:55:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:17.857 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69382 00:10:17.857 killing process with pid 69382 00:10:17.857 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:17.857 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:17.857 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69382' 00:10:17.857 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69382 00:10:17.857 [2024-11-15 10:55:24.570714] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.857 10:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69382 00:10:18.115 [2024-11-15 10:55:24.820220] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.490 10:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:19.490 10:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:19.490 10:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MbWRDmBtxn 00:10:19.490 ************************************ 00:10:19.490 END TEST raid_write_error_test 00:10:19.490 ************************************ 00:10:19.490 10:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:19.490 10:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:19.490 10:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:19.490 10:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:19.490 10:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:10:19.490 00:10:19.490 real 0m4.624s 00:10:19.490 user 0m5.478s 00:10:19.490 sys 0m0.583s 00:10:19.490 10:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:19.490 10:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.490 10:55:26 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:19.490 10:55:26 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:19.490 10:55:26 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:19.490 10:55:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:19.490 10:55:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:19.490 10:55:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.490 ************************************ 00:10:19.490 START TEST raid_state_function_test 00:10:19.490 ************************************ 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:19.490 
10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:19.490 10:55:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69525 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69525' 00:10:19.490 Process raid pid: 69525 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69525 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69525 ']' 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:19.490 10:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.490 [2024-11-15 10:55:26.232792] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:10:19.490 [2024-11-15 10:55:26.233002] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.490 [2024-11-15 10:55:26.406736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.749 [2024-11-15 10:55:26.527856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.008 [2024-11-15 10:55:26.747068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.008 [2024-11-15 10:55:26.747108] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.267 [2024-11-15 10:55:27.101226] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:20.267 [2024-11-15 10:55:27.101286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:20.267 [2024-11-15 10:55:27.101297] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:20.267 [2024-11-15 10:55:27.101317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:20.267 [2024-11-15 10:55:27.101324] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:20.267 [2024-11-15 10:55:27.101333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:20.267 [2024-11-15 10:55:27.101340] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:20.267 [2024-11-15 10:55:27.101348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.267 "name": "Existed_Raid", 00:10:20.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.267 "strip_size_kb": 64, 00:10:20.267 "state": "configuring", 00:10:20.267 "raid_level": "raid0", 00:10:20.267 "superblock": false, 00:10:20.267 "num_base_bdevs": 4, 00:10:20.267 "num_base_bdevs_discovered": 0, 00:10:20.267 "num_base_bdevs_operational": 4, 00:10:20.267 "base_bdevs_list": [ 00:10:20.267 { 00:10:20.267 "name": "BaseBdev1", 00:10:20.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.267 "is_configured": false, 00:10:20.267 "data_offset": 0, 00:10:20.267 "data_size": 0 00:10:20.267 }, 00:10:20.267 { 00:10:20.267 "name": "BaseBdev2", 00:10:20.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.267 "is_configured": false, 00:10:20.267 "data_offset": 0, 00:10:20.267 "data_size": 0 00:10:20.267 }, 00:10:20.267 { 00:10:20.267 "name": "BaseBdev3", 00:10:20.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.267 "is_configured": false, 00:10:20.267 "data_offset": 0, 00:10:20.267 "data_size": 0 00:10:20.267 }, 00:10:20.267 { 00:10:20.267 "name": "BaseBdev4", 00:10:20.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.267 "is_configured": false, 00:10:20.267 "data_offset": 0, 00:10:20.267 "data_size": 0 00:10:20.267 } 00:10:20.267 ] 00:10:20.267 }' 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.267 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.837 [2024-11-15 10:55:27.528475] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:20.837 [2024-11-15 10:55:27.528596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.837 [2024-11-15 10:55:27.540486] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:20.837 [2024-11-15 10:55:27.540588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:20.837 [2024-11-15 10:55:27.540619] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:20.837 [2024-11-15 10:55:27.540642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:20.837 [2024-11-15 10:55:27.540690] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:20.837 [2024-11-15 10:55:27.540717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:20.837 [2024-11-15 10:55:27.540743] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:20.837 [2024-11-15 10:55:27.540797] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.837 [2024-11-15 10:55:27.587761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:20.837 BaseBdev1 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.837 [ 00:10:20.837 { 00:10:20.837 "name": "BaseBdev1", 00:10:20.837 "aliases": [ 00:10:20.837 "c07021d8-7bde-4d37-9501-3765ba207239" 00:10:20.837 ], 00:10:20.837 "product_name": "Malloc disk", 00:10:20.837 "block_size": 512, 00:10:20.837 "num_blocks": 65536, 00:10:20.837 "uuid": "c07021d8-7bde-4d37-9501-3765ba207239", 00:10:20.837 "assigned_rate_limits": { 00:10:20.837 "rw_ios_per_sec": 0, 00:10:20.837 "rw_mbytes_per_sec": 0, 00:10:20.837 "r_mbytes_per_sec": 0, 00:10:20.837 "w_mbytes_per_sec": 0 00:10:20.837 }, 00:10:20.837 "claimed": true, 00:10:20.837 "claim_type": "exclusive_write", 00:10:20.837 "zoned": false, 00:10:20.837 "supported_io_types": { 00:10:20.837 "read": true, 00:10:20.837 "write": true, 00:10:20.837 "unmap": true, 00:10:20.837 "flush": true, 00:10:20.837 "reset": true, 00:10:20.837 "nvme_admin": false, 00:10:20.837 "nvme_io": false, 00:10:20.837 "nvme_io_md": false, 00:10:20.837 "write_zeroes": true, 00:10:20.837 "zcopy": true, 00:10:20.837 "get_zone_info": false, 00:10:20.837 "zone_management": false, 00:10:20.837 "zone_append": false, 00:10:20.837 "compare": false, 00:10:20.837 "compare_and_write": false, 00:10:20.837 "abort": true, 00:10:20.837 "seek_hole": false, 00:10:20.837 "seek_data": false, 00:10:20.837 "copy": true, 00:10:20.837 "nvme_iov_md": false 00:10:20.837 }, 00:10:20.837 "memory_domains": [ 00:10:20.837 { 00:10:20.837 "dma_device_id": "system", 00:10:20.837 "dma_device_type": 1 00:10:20.837 }, 00:10:20.837 { 00:10:20.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.837 "dma_device_type": 2 00:10:20.837 } 00:10:20.837 ], 00:10:20.837 "driver_specific": {} 00:10:20.837 } 00:10:20.837 ] 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.837 "name": "Existed_Raid", 
00:10:20.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.837 "strip_size_kb": 64, 00:10:20.837 "state": "configuring", 00:10:20.837 "raid_level": "raid0", 00:10:20.837 "superblock": false, 00:10:20.837 "num_base_bdevs": 4, 00:10:20.837 "num_base_bdevs_discovered": 1, 00:10:20.837 "num_base_bdevs_operational": 4, 00:10:20.837 "base_bdevs_list": [ 00:10:20.837 { 00:10:20.837 "name": "BaseBdev1", 00:10:20.837 "uuid": "c07021d8-7bde-4d37-9501-3765ba207239", 00:10:20.837 "is_configured": true, 00:10:20.837 "data_offset": 0, 00:10:20.837 "data_size": 65536 00:10:20.837 }, 00:10:20.837 { 00:10:20.837 "name": "BaseBdev2", 00:10:20.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.837 "is_configured": false, 00:10:20.837 "data_offset": 0, 00:10:20.837 "data_size": 0 00:10:20.837 }, 00:10:20.837 { 00:10:20.837 "name": "BaseBdev3", 00:10:20.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.837 "is_configured": false, 00:10:20.837 "data_offset": 0, 00:10:20.837 "data_size": 0 00:10:20.837 }, 00:10:20.837 { 00:10:20.837 "name": "BaseBdev4", 00:10:20.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.837 "is_configured": false, 00:10:20.837 "data_offset": 0, 00:10:20.837 "data_size": 0 00:10:20.837 } 00:10:20.837 ] 00:10:20.837 }' 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.837 10:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.406 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:21.406 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.406 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.406 [2024-11-15 10:55:28.035065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.406 [2024-11-15 10:55:28.035120] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:21.406 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.406 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:21.406 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.406 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.406 [2024-11-15 10:55:28.043090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.406 [2024-11-15 10:55:28.045021] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:21.406 [2024-11-15 10:55:28.045129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:21.406 [2024-11-15 10:55:28.045162] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:21.406 [2024-11-15 10:55:28.045176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:21.406 [2024-11-15 10:55:28.045185] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:21.407 [2024-11-15 10:55:28.045195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.407 "name": "Existed_Raid", 00:10:21.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.407 "strip_size_kb": 64, 00:10:21.407 "state": "configuring", 00:10:21.407 "raid_level": "raid0", 00:10:21.407 "superblock": false, 00:10:21.407 "num_base_bdevs": 4, 00:10:21.407 
"num_base_bdevs_discovered": 1, 00:10:21.407 "num_base_bdevs_operational": 4, 00:10:21.407 "base_bdevs_list": [ 00:10:21.407 { 00:10:21.407 "name": "BaseBdev1", 00:10:21.407 "uuid": "c07021d8-7bde-4d37-9501-3765ba207239", 00:10:21.407 "is_configured": true, 00:10:21.407 "data_offset": 0, 00:10:21.407 "data_size": 65536 00:10:21.407 }, 00:10:21.407 { 00:10:21.407 "name": "BaseBdev2", 00:10:21.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.407 "is_configured": false, 00:10:21.407 "data_offset": 0, 00:10:21.407 "data_size": 0 00:10:21.407 }, 00:10:21.407 { 00:10:21.407 "name": "BaseBdev3", 00:10:21.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.407 "is_configured": false, 00:10:21.407 "data_offset": 0, 00:10:21.407 "data_size": 0 00:10:21.407 }, 00:10:21.407 { 00:10:21.407 "name": "BaseBdev4", 00:10:21.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.407 "is_configured": false, 00:10:21.407 "data_offset": 0, 00:10:21.407 "data_size": 0 00:10:21.407 } 00:10:21.407 ] 00:10:21.407 }' 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.407 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.667 [2024-11-15 10:55:28.507896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.667 BaseBdev2 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:21.667 10:55:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.667 [ 00:10:21.667 { 00:10:21.667 "name": "BaseBdev2", 00:10:21.667 "aliases": [ 00:10:21.667 "6cc488b4-5b9b-4dfb-9103-13994bc1d0ef" 00:10:21.667 ], 00:10:21.667 "product_name": "Malloc disk", 00:10:21.667 "block_size": 512, 00:10:21.667 "num_blocks": 65536, 00:10:21.667 "uuid": "6cc488b4-5b9b-4dfb-9103-13994bc1d0ef", 00:10:21.667 "assigned_rate_limits": { 00:10:21.667 "rw_ios_per_sec": 0, 00:10:21.667 "rw_mbytes_per_sec": 0, 00:10:21.667 "r_mbytes_per_sec": 0, 00:10:21.667 "w_mbytes_per_sec": 0 00:10:21.667 }, 00:10:21.667 "claimed": true, 00:10:21.667 "claim_type": "exclusive_write", 00:10:21.667 "zoned": false, 00:10:21.667 "supported_io_types": { 
00:10:21.667 "read": true, 00:10:21.667 "write": true, 00:10:21.667 "unmap": true, 00:10:21.667 "flush": true, 00:10:21.667 "reset": true, 00:10:21.667 "nvme_admin": false, 00:10:21.667 "nvme_io": false, 00:10:21.667 "nvme_io_md": false, 00:10:21.667 "write_zeroes": true, 00:10:21.667 "zcopy": true, 00:10:21.667 "get_zone_info": false, 00:10:21.667 "zone_management": false, 00:10:21.667 "zone_append": false, 00:10:21.667 "compare": false, 00:10:21.667 "compare_and_write": false, 00:10:21.667 "abort": true, 00:10:21.667 "seek_hole": false, 00:10:21.667 "seek_data": false, 00:10:21.667 "copy": true, 00:10:21.667 "nvme_iov_md": false 00:10:21.667 }, 00:10:21.667 "memory_domains": [ 00:10:21.667 { 00:10:21.667 "dma_device_id": "system", 00:10:21.667 "dma_device_type": 1 00:10:21.667 }, 00:10:21.667 { 00:10:21.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.667 "dma_device_type": 2 00:10:21.667 } 00:10:21.667 ], 00:10:21.667 "driver_specific": {} 00:10:21.667 } 00:10:21.667 ] 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.667 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.927 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.927 "name": "Existed_Raid", 00:10:21.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.927 "strip_size_kb": 64, 00:10:21.927 "state": "configuring", 00:10:21.927 "raid_level": "raid0", 00:10:21.927 "superblock": false, 00:10:21.927 "num_base_bdevs": 4, 00:10:21.927 "num_base_bdevs_discovered": 2, 00:10:21.927 "num_base_bdevs_operational": 4, 00:10:21.927 "base_bdevs_list": [ 00:10:21.927 { 00:10:21.927 "name": "BaseBdev1", 00:10:21.927 "uuid": "c07021d8-7bde-4d37-9501-3765ba207239", 00:10:21.927 "is_configured": true, 00:10:21.927 "data_offset": 0, 00:10:21.927 "data_size": 65536 00:10:21.927 }, 00:10:21.927 { 00:10:21.927 "name": "BaseBdev2", 00:10:21.927 "uuid": "6cc488b4-5b9b-4dfb-9103-13994bc1d0ef", 00:10:21.927 
"is_configured": true, 00:10:21.927 "data_offset": 0, 00:10:21.927 "data_size": 65536 00:10:21.927 }, 00:10:21.927 { 00:10:21.927 "name": "BaseBdev3", 00:10:21.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.927 "is_configured": false, 00:10:21.927 "data_offset": 0, 00:10:21.927 "data_size": 0 00:10:21.927 }, 00:10:21.927 { 00:10:21.927 "name": "BaseBdev4", 00:10:21.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.927 "is_configured": false, 00:10:21.927 "data_offset": 0, 00:10:21.927 "data_size": 0 00:10:21.927 } 00:10:21.927 ] 00:10:21.927 }' 00:10:21.927 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.927 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.184 10:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:22.184 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.184 10:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.184 [2024-11-15 10:55:29.034447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.184 BaseBdev3 00:10:22.184 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.184 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:22.184 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:22.184 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:22.184 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:22.184 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:22.184 10:55:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:22.184 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:22.184 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.184 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.184 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.184 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:22.184 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.184 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.184 [ 00:10:22.184 { 00:10:22.184 "name": "BaseBdev3", 00:10:22.184 "aliases": [ 00:10:22.184 "d899468e-8b69-4d4f-8c77-09672aba37fb" 00:10:22.184 ], 00:10:22.184 "product_name": "Malloc disk", 00:10:22.184 "block_size": 512, 00:10:22.184 "num_blocks": 65536, 00:10:22.184 "uuid": "d899468e-8b69-4d4f-8c77-09672aba37fb", 00:10:22.184 "assigned_rate_limits": { 00:10:22.184 "rw_ios_per_sec": 0, 00:10:22.184 "rw_mbytes_per_sec": 0, 00:10:22.184 "r_mbytes_per_sec": 0, 00:10:22.184 "w_mbytes_per_sec": 0 00:10:22.184 }, 00:10:22.184 "claimed": true, 00:10:22.184 "claim_type": "exclusive_write", 00:10:22.184 "zoned": false, 00:10:22.184 "supported_io_types": { 00:10:22.184 "read": true, 00:10:22.184 "write": true, 00:10:22.184 "unmap": true, 00:10:22.184 "flush": true, 00:10:22.184 "reset": true, 00:10:22.184 "nvme_admin": false, 00:10:22.184 "nvme_io": false, 00:10:22.184 "nvme_io_md": false, 00:10:22.184 "write_zeroes": true, 00:10:22.184 "zcopy": true, 00:10:22.184 "get_zone_info": false, 00:10:22.184 "zone_management": false, 00:10:22.184 "zone_append": false, 00:10:22.184 "compare": false, 00:10:22.184 "compare_and_write": false, 
00:10:22.184 "abort": true, 00:10:22.184 "seek_hole": false, 00:10:22.184 "seek_data": false, 00:10:22.184 "copy": true, 00:10:22.184 "nvme_iov_md": false 00:10:22.184 }, 00:10:22.184 "memory_domains": [ 00:10:22.184 { 00:10:22.184 "dma_device_id": "system", 00:10:22.184 "dma_device_type": 1 00:10:22.184 }, 00:10:22.184 { 00:10:22.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.184 "dma_device_type": 2 00:10:22.184 } 00:10:22.184 ], 00:10:22.184 "driver_specific": {} 00:10:22.185 } 00:10:22.185 ] 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.185 "name": "Existed_Raid", 00:10:22.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.185 "strip_size_kb": 64, 00:10:22.185 "state": "configuring", 00:10:22.185 "raid_level": "raid0", 00:10:22.185 "superblock": false, 00:10:22.185 "num_base_bdevs": 4, 00:10:22.185 "num_base_bdevs_discovered": 3, 00:10:22.185 "num_base_bdevs_operational": 4, 00:10:22.185 "base_bdevs_list": [ 00:10:22.185 { 00:10:22.185 "name": "BaseBdev1", 00:10:22.185 "uuid": "c07021d8-7bde-4d37-9501-3765ba207239", 00:10:22.185 "is_configured": true, 00:10:22.185 "data_offset": 0, 00:10:22.185 "data_size": 65536 00:10:22.185 }, 00:10:22.185 { 00:10:22.185 "name": "BaseBdev2", 00:10:22.185 "uuid": "6cc488b4-5b9b-4dfb-9103-13994bc1d0ef", 00:10:22.185 "is_configured": true, 00:10:22.185 "data_offset": 0, 00:10:22.185 "data_size": 65536 00:10:22.185 }, 00:10:22.185 { 00:10:22.185 "name": "BaseBdev3", 00:10:22.185 "uuid": "d899468e-8b69-4d4f-8c77-09672aba37fb", 00:10:22.185 "is_configured": true, 00:10:22.185 "data_offset": 0, 00:10:22.185 "data_size": 65536 00:10:22.185 }, 00:10:22.185 { 00:10:22.185 "name": "BaseBdev4", 00:10:22.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.185 "is_configured": false, 
00:10:22.185 "data_offset": 0, 00:10:22.185 "data_size": 0 00:10:22.185 } 00:10:22.185 ] 00:10:22.185 }' 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.185 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.750 [2024-11-15 10:55:29.525690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:22.750 [2024-11-15 10:55:29.525822] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:22.750 [2024-11-15 10:55:29.525849] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:22.750 [2024-11-15 10:55:29.526136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:22.750 [2024-11-15 10:55:29.526355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:22.750 [2024-11-15 10:55:29.526403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:22.750 [2024-11-15 10:55:29.526739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.750 BaseBdev4 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.750 [ 00:10:22.750 { 00:10:22.750 "name": "BaseBdev4", 00:10:22.750 "aliases": [ 00:10:22.750 "5dca9793-dec9-44e5-995d-e4d10ed3d6eb" 00:10:22.750 ], 00:10:22.750 "product_name": "Malloc disk", 00:10:22.750 "block_size": 512, 00:10:22.750 "num_blocks": 65536, 00:10:22.750 "uuid": "5dca9793-dec9-44e5-995d-e4d10ed3d6eb", 00:10:22.750 "assigned_rate_limits": { 00:10:22.750 "rw_ios_per_sec": 0, 00:10:22.750 "rw_mbytes_per_sec": 0, 00:10:22.750 "r_mbytes_per_sec": 0, 00:10:22.750 "w_mbytes_per_sec": 0 00:10:22.750 }, 00:10:22.750 "claimed": true, 00:10:22.750 "claim_type": "exclusive_write", 00:10:22.750 "zoned": false, 00:10:22.750 "supported_io_types": { 00:10:22.750 "read": true, 00:10:22.750 "write": true, 00:10:22.750 "unmap": true, 00:10:22.750 "flush": true, 00:10:22.750 "reset": true, 00:10:22.750 
"nvme_admin": false, 00:10:22.750 "nvme_io": false, 00:10:22.750 "nvme_io_md": false, 00:10:22.750 "write_zeroes": true, 00:10:22.750 "zcopy": true, 00:10:22.750 "get_zone_info": false, 00:10:22.750 "zone_management": false, 00:10:22.750 "zone_append": false, 00:10:22.750 "compare": false, 00:10:22.750 "compare_and_write": false, 00:10:22.750 "abort": true, 00:10:22.750 "seek_hole": false, 00:10:22.750 "seek_data": false, 00:10:22.750 "copy": true, 00:10:22.750 "nvme_iov_md": false 00:10:22.750 }, 00:10:22.750 "memory_domains": [ 00:10:22.750 { 00:10:22.750 "dma_device_id": "system", 00:10:22.750 "dma_device_type": 1 00:10:22.750 }, 00:10:22.750 { 00:10:22.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.750 "dma_device_type": 2 00:10:22.750 } 00:10:22.750 ], 00:10:22.750 "driver_specific": {} 00:10:22.750 } 00:10:22.750 ] 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.750 10:55:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.750 "name": "Existed_Raid", 00:10:22.750 "uuid": "09abe86f-7071-46d6-af89-8b7c7db5bec2", 00:10:22.750 "strip_size_kb": 64, 00:10:22.750 "state": "online", 00:10:22.750 "raid_level": "raid0", 00:10:22.750 "superblock": false, 00:10:22.750 "num_base_bdevs": 4, 00:10:22.750 "num_base_bdevs_discovered": 4, 00:10:22.750 "num_base_bdevs_operational": 4, 00:10:22.750 "base_bdevs_list": [ 00:10:22.750 { 00:10:22.750 "name": "BaseBdev1", 00:10:22.750 "uuid": "c07021d8-7bde-4d37-9501-3765ba207239", 00:10:22.750 "is_configured": true, 00:10:22.750 "data_offset": 0, 00:10:22.750 "data_size": 65536 00:10:22.750 }, 00:10:22.750 { 00:10:22.750 "name": "BaseBdev2", 00:10:22.750 "uuid": "6cc488b4-5b9b-4dfb-9103-13994bc1d0ef", 00:10:22.750 "is_configured": true, 00:10:22.750 "data_offset": 0, 00:10:22.750 "data_size": 65536 00:10:22.750 }, 00:10:22.750 { 00:10:22.750 "name": "BaseBdev3", 00:10:22.750 "uuid": 
"d899468e-8b69-4d4f-8c77-09672aba37fb", 00:10:22.750 "is_configured": true, 00:10:22.750 "data_offset": 0, 00:10:22.750 "data_size": 65536 00:10:22.750 }, 00:10:22.750 { 00:10:22.750 "name": "BaseBdev4", 00:10:22.750 "uuid": "5dca9793-dec9-44e5-995d-e4d10ed3d6eb", 00:10:22.750 "is_configured": true, 00:10:22.750 "data_offset": 0, 00:10:22.750 "data_size": 65536 00:10:22.750 } 00:10:22.750 ] 00:10:22.750 }' 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.750 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.315 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:23.315 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:23.315 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:23.315 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:23.315 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:23.315 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:23.315 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:23.315 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.315 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.315 10:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:23.315 [2024-11-15 10:55:29.985370] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.315 10:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.315 10:55:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.315 "name": "Existed_Raid", 00:10:23.315 "aliases": [ 00:10:23.315 "09abe86f-7071-46d6-af89-8b7c7db5bec2" 00:10:23.315 ], 00:10:23.315 "product_name": "Raid Volume", 00:10:23.315 "block_size": 512, 00:10:23.315 "num_blocks": 262144, 00:10:23.315 "uuid": "09abe86f-7071-46d6-af89-8b7c7db5bec2", 00:10:23.315 "assigned_rate_limits": { 00:10:23.315 "rw_ios_per_sec": 0, 00:10:23.315 "rw_mbytes_per_sec": 0, 00:10:23.315 "r_mbytes_per_sec": 0, 00:10:23.315 "w_mbytes_per_sec": 0 00:10:23.315 }, 00:10:23.315 "claimed": false, 00:10:23.315 "zoned": false, 00:10:23.315 "supported_io_types": { 00:10:23.315 "read": true, 00:10:23.315 "write": true, 00:10:23.315 "unmap": true, 00:10:23.315 "flush": true, 00:10:23.315 "reset": true, 00:10:23.315 "nvme_admin": false, 00:10:23.315 "nvme_io": false, 00:10:23.315 "nvme_io_md": false, 00:10:23.315 "write_zeroes": true, 00:10:23.315 "zcopy": false, 00:10:23.315 "get_zone_info": false, 00:10:23.315 "zone_management": false, 00:10:23.315 "zone_append": false, 00:10:23.315 "compare": false, 00:10:23.315 "compare_and_write": false, 00:10:23.315 "abort": false, 00:10:23.315 "seek_hole": false, 00:10:23.315 "seek_data": false, 00:10:23.315 "copy": false, 00:10:23.315 "nvme_iov_md": false 00:10:23.315 }, 00:10:23.315 "memory_domains": [ 00:10:23.315 { 00:10:23.315 "dma_device_id": "system", 00:10:23.315 "dma_device_type": 1 00:10:23.315 }, 00:10:23.315 { 00:10:23.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.315 "dma_device_type": 2 00:10:23.315 }, 00:10:23.315 { 00:10:23.315 "dma_device_id": "system", 00:10:23.315 "dma_device_type": 1 00:10:23.315 }, 00:10:23.315 { 00:10:23.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.315 "dma_device_type": 2 00:10:23.315 }, 00:10:23.316 { 00:10:23.316 "dma_device_id": "system", 00:10:23.316 "dma_device_type": 1 00:10:23.316 }, 00:10:23.316 { 00:10:23.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:23.316 "dma_device_type": 2 00:10:23.316 }, 00:10:23.316 { 00:10:23.316 "dma_device_id": "system", 00:10:23.316 "dma_device_type": 1 00:10:23.316 }, 00:10:23.316 { 00:10:23.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.316 "dma_device_type": 2 00:10:23.316 } 00:10:23.316 ], 00:10:23.316 "driver_specific": { 00:10:23.316 "raid": { 00:10:23.316 "uuid": "09abe86f-7071-46d6-af89-8b7c7db5bec2", 00:10:23.316 "strip_size_kb": 64, 00:10:23.316 "state": "online", 00:10:23.316 "raid_level": "raid0", 00:10:23.316 "superblock": false, 00:10:23.316 "num_base_bdevs": 4, 00:10:23.316 "num_base_bdevs_discovered": 4, 00:10:23.316 "num_base_bdevs_operational": 4, 00:10:23.316 "base_bdevs_list": [ 00:10:23.316 { 00:10:23.316 "name": "BaseBdev1", 00:10:23.316 "uuid": "c07021d8-7bde-4d37-9501-3765ba207239", 00:10:23.316 "is_configured": true, 00:10:23.316 "data_offset": 0, 00:10:23.316 "data_size": 65536 00:10:23.316 }, 00:10:23.316 { 00:10:23.316 "name": "BaseBdev2", 00:10:23.316 "uuid": "6cc488b4-5b9b-4dfb-9103-13994bc1d0ef", 00:10:23.316 "is_configured": true, 00:10:23.316 "data_offset": 0, 00:10:23.316 "data_size": 65536 00:10:23.316 }, 00:10:23.316 { 00:10:23.316 "name": "BaseBdev3", 00:10:23.316 "uuid": "d899468e-8b69-4d4f-8c77-09672aba37fb", 00:10:23.316 "is_configured": true, 00:10:23.316 "data_offset": 0, 00:10:23.316 "data_size": 65536 00:10:23.316 }, 00:10:23.316 { 00:10:23.316 "name": "BaseBdev4", 00:10:23.316 "uuid": "5dca9793-dec9-44e5-995d-e4d10ed3d6eb", 00:10:23.316 "is_configured": true, 00:10:23.316 "data_offset": 0, 00:10:23.316 "data_size": 65536 00:10:23.316 } 00:10:23.316 ] 00:10:23.316 } 00:10:23.316 } 00:10:23.316 }' 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:23.316 BaseBdev2 00:10:23.316 BaseBdev3 
00:10:23.316 BaseBdev4' 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.316 10:55:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.316 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.573 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.574 10:55:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.574 [2024-11-15 10:55:30.308537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:23.574 [2024-11-15 10:55:30.308619] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.574 [2024-11-15 10:55:30.308698] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.574 "name": "Existed_Raid", 00:10:23.574 "uuid": "09abe86f-7071-46d6-af89-8b7c7db5bec2", 00:10:23.574 "strip_size_kb": 64, 00:10:23.574 "state": "offline", 00:10:23.574 "raid_level": "raid0", 00:10:23.574 "superblock": false, 00:10:23.574 "num_base_bdevs": 4, 00:10:23.574 "num_base_bdevs_discovered": 3, 00:10:23.574 "num_base_bdevs_operational": 3, 00:10:23.574 "base_bdevs_list": [ 00:10:23.574 { 00:10:23.574 "name": null, 00:10:23.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.574 "is_configured": false, 00:10:23.574 "data_offset": 0, 00:10:23.574 "data_size": 65536 00:10:23.574 }, 00:10:23.574 { 00:10:23.574 "name": "BaseBdev2", 00:10:23.574 "uuid": "6cc488b4-5b9b-4dfb-9103-13994bc1d0ef", 00:10:23.574 "is_configured": 
true, 00:10:23.574 "data_offset": 0, 00:10:23.574 "data_size": 65536 00:10:23.574 }, 00:10:23.574 { 00:10:23.574 "name": "BaseBdev3", 00:10:23.574 "uuid": "d899468e-8b69-4d4f-8c77-09672aba37fb", 00:10:23.574 "is_configured": true, 00:10:23.574 "data_offset": 0, 00:10:23.574 "data_size": 65536 00:10:23.574 }, 00:10:23.574 { 00:10:23.574 "name": "BaseBdev4", 00:10:23.574 "uuid": "5dca9793-dec9-44e5-995d-e4d10ed3d6eb", 00:10:23.574 "is_configured": true, 00:10:23.574 "data_offset": 0, 00:10:23.574 "data_size": 65536 00:10:23.574 } 00:10:23.574 ] 00:10:23.574 }' 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.574 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.159 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:24.159 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:24.159 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.159 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:24.159 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.159 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.159 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.159 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:24.159 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:24.159 10:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:24.159 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:24.159 10:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.159 [2024-11-15 10:55:30.966967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:24.159 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.159 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:24.159 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:24.159 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.159 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.159 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:24.159 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.417 [2024-11-15 10:55:31.126877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:24.417 10:55:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.417 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.417 [2024-11-15 10:55:31.281391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:24.417 [2024-11-15 10:55:31.281439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.675 BaseBdev2 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:10:24.675 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.676 [ 00:10:24.676 { 00:10:24.676 "name": "BaseBdev2", 00:10:24.676 "aliases": [ 00:10:24.676 "70f5fb91-ef7c-47d7-9db7-9d0829e76ab5" 00:10:24.676 ], 00:10:24.676 "product_name": "Malloc disk", 00:10:24.676 "block_size": 512, 00:10:24.676 "num_blocks": 65536, 00:10:24.676 "uuid": "70f5fb91-ef7c-47d7-9db7-9d0829e76ab5", 00:10:24.676 "assigned_rate_limits": { 00:10:24.676 "rw_ios_per_sec": 0, 00:10:24.676 "rw_mbytes_per_sec": 0, 00:10:24.676 "r_mbytes_per_sec": 0, 00:10:24.676 "w_mbytes_per_sec": 0 00:10:24.676 }, 00:10:24.676 "claimed": false, 00:10:24.676 "zoned": false, 00:10:24.676 "supported_io_types": { 00:10:24.676 "read": true, 00:10:24.676 "write": true, 00:10:24.676 "unmap": true, 00:10:24.676 "flush": true, 00:10:24.676 "reset": true, 00:10:24.676 "nvme_admin": false, 00:10:24.676 "nvme_io": false, 00:10:24.676 "nvme_io_md": false, 00:10:24.676 "write_zeroes": true, 00:10:24.676 "zcopy": true, 00:10:24.676 "get_zone_info": false, 00:10:24.676 "zone_management": false, 00:10:24.676 "zone_append": false, 00:10:24.676 "compare": false, 00:10:24.676 "compare_and_write": false, 00:10:24.676 "abort": true, 00:10:24.676 "seek_hole": false, 00:10:24.676 
"seek_data": false, 00:10:24.676 "copy": true, 00:10:24.676 "nvme_iov_md": false 00:10:24.676 }, 00:10:24.676 "memory_domains": [ 00:10:24.676 { 00:10:24.676 "dma_device_id": "system", 00:10:24.676 "dma_device_type": 1 00:10:24.676 }, 00:10:24.676 { 00:10:24.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.676 "dma_device_type": 2 00:10:24.676 } 00:10:24.676 ], 00:10:24.676 "driver_specific": {} 00:10:24.676 } 00:10:24.676 ] 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.676 BaseBdev3 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.676 [ 00:10:24.676 { 00:10:24.676 "name": "BaseBdev3", 00:10:24.676 "aliases": [ 00:10:24.676 "4230839c-9c4c-45d1-9b4a-5d58755bbb05" 00:10:24.676 ], 00:10:24.676 "product_name": "Malloc disk", 00:10:24.676 "block_size": 512, 00:10:24.676 "num_blocks": 65536, 00:10:24.676 "uuid": "4230839c-9c4c-45d1-9b4a-5d58755bbb05", 00:10:24.676 "assigned_rate_limits": { 00:10:24.676 "rw_ios_per_sec": 0, 00:10:24.676 "rw_mbytes_per_sec": 0, 00:10:24.676 "r_mbytes_per_sec": 0, 00:10:24.676 "w_mbytes_per_sec": 0 00:10:24.676 }, 00:10:24.676 "claimed": false, 00:10:24.676 "zoned": false, 00:10:24.676 "supported_io_types": { 00:10:24.676 "read": true, 00:10:24.676 "write": true, 00:10:24.676 "unmap": true, 00:10:24.676 "flush": true, 00:10:24.676 "reset": true, 00:10:24.676 "nvme_admin": false, 00:10:24.676 "nvme_io": false, 00:10:24.676 "nvme_io_md": false, 00:10:24.676 "write_zeroes": true, 00:10:24.676 "zcopy": true, 00:10:24.676 "get_zone_info": false, 00:10:24.676 "zone_management": false, 00:10:24.676 "zone_append": false, 00:10:24.676 "compare": false, 00:10:24.676 "compare_and_write": false, 00:10:24.676 "abort": true, 00:10:24.676 "seek_hole": false, 00:10:24.676 "seek_data": false, 
00:10:24.676 "copy": true,
00:10:24.676 "nvme_iov_md": false
00:10:24.676 },
00:10:24.676 "memory_domains": [
00:10:24.676 {
00:10:24.676 "dma_device_id": "system",
00:10:24.676 "dma_device_type": 1
00:10:24.676 },
00:10:24.676 {
00:10:24.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:24.676 "dma_device_type": 2
00:10:24.676 }
00:10:24.676 ],
00:10:24.676 "driver_specific": {}
00:10:24.676 }
00:10:24.676 ]
00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:24.676 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.934 BaseBdev4
00:10:24.934 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:24.934 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:10:24.934 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4
00:10:24.934 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:10:24.934 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:10:24.934 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:10:24.934 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:10:24.934 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:10:24.934 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:24.934 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.934 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:24.934 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:10:24.934 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:24.934 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.934 [
00:10:24.934 {
00:10:24.934 "name": "BaseBdev4",
00:10:24.934 "aliases": [
00:10:24.934 "60925f5c-d2be-45fc-94c7-2d7516641493"
00:10:24.934 ],
00:10:24.934 "product_name": "Malloc disk",
00:10:24.934 "block_size": 512,
00:10:24.934 "num_blocks": 65536,
00:10:24.934 "uuid": "60925f5c-d2be-45fc-94c7-2d7516641493",
00:10:24.934 "assigned_rate_limits": {
00:10:24.934 "rw_ios_per_sec": 0,
00:10:24.934 "rw_mbytes_per_sec": 0,
00:10:24.934 "r_mbytes_per_sec": 0,
00:10:24.934 "w_mbytes_per_sec": 0
00:10:24.934 },
00:10:24.934 "claimed": false,
00:10:24.934 "zoned": false,
00:10:24.934 "supported_io_types": {
00:10:24.934 "read": true,
00:10:24.934 "write": true,
00:10:24.934 "unmap": true,
00:10:24.934 "flush": true,
00:10:24.934 "reset": true,
00:10:24.934 "nvme_admin": false,
00:10:24.934 "nvme_io": false,
00:10:24.934 "nvme_io_md": false,
00:10:24.935 "write_zeroes": true,
00:10:24.935 "zcopy": true,
00:10:24.935 "get_zone_info": false,
00:10:24.935 "zone_management": false,
00:10:24.935 "zone_append": false,
00:10:24.935 "compare": false,
00:10:24.935 "compare_and_write": false,
00:10:24.935 "abort": true,
00:10:24.935 "seek_hole": false,
00:10:24.935 "seek_data": false,
00:10:24.935 "copy": true,
00:10:24.935 "nvme_iov_md": false
00:10:24.935 },
00:10:24.935 "memory_domains": [
00:10:24.935 {
00:10:24.935 "dma_device_id": "system",
00:10:24.935 "dma_device_type": 1
00:10:24.935 },
00:10:24.935 {
00:10:24.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:24.935 "dma_device_type": 2
00:10:24.935 }
00:10:24.935 ],
00:10:24.935 "driver_specific": {}
00:10:24.935 }
00:10:24.935 ]
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.935 [2024-11-15 10:55:31.647492] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:24.935 [2024-11-15 10:55:31.647588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:24.935 [2024-11-15 10:55:31.647641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:24.935 [2024-11-15 10:55:31.649784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:24.935 [2024-11-15 10:55:31.649893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:24.935 "name": "Existed_Raid",
00:10:24.935 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:24.935 "strip_size_kb": 64,
00:10:24.935 "state": "configuring",
00:10:24.935 "raid_level": "raid0",
00:10:24.935 "superblock": false,
00:10:24.935 "num_base_bdevs": 4,
00:10:24.935 "num_base_bdevs_discovered": 3,
00:10:24.935 "num_base_bdevs_operational": 4,
00:10:24.935 "base_bdevs_list": [
00:10:24.935 {
00:10:24.935 "name": "BaseBdev1",
00:10:24.935 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:24.935 "is_configured": false,
00:10:24.935 "data_offset": 0,
00:10:24.935 "data_size": 0
00:10:24.935 },
00:10:24.935 {
00:10:24.935 "name": "BaseBdev2",
00:10:24.935 "uuid": "70f5fb91-ef7c-47d7-9db7-9d0829e76ab5",
00:10:24.935 "is_configured": true,
00:10:24.935 "data_offset": 0,
00:10:24.935 "data_size": 65536
00:10:24.935 },
00:10:24.935 {
00:10:24.935 "name": "BaseBdev3",
00:10:24.935 "uuid": "4230839c-9c4c-45d1-9b4a-5d58755bbb05",
00:10:24.935 "is_configured": true,
00:10:24.935 "data_offset": 0,
00:10:24.935 "data_size": 65536
00:10:24.935 },
00:10:24.935 {
00:10:24.935 "name": "BaseBdev4",
00:10:24.935 "uuid": "60925f5c-d2be-45fc-94c7-2d7516641493",
00:10:24.935 "is_configured": true,
00:10:24.935 "data_offset": 0,
00:10:24.935 "data_size": 65536
00:10:24.935 }
00:10:24.935 ]
00:10:24.935 }'
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:24.935 10:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.198 [2024-11-15 10:55:32.046815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:25.198 "name": "Existed_Raid",
00:10:25.198 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:25.198 "strip_size_kb": 64,
00:10:25.198 "state": "configuring",
00:10:25.198 "raid_level": "raid0",
00:10:25.198 "superblock": false,
00:10:25.198 "num_base_bdevs": 4,
00:10:25.198 "num_base_bdevs_discovered": 2,
00:10:25.198 "num_base_bdevs_operational": 4,
00:10:25.198 "base_bdevs_list": [
00:10:25.198 {
00:10:25.198 "name": "BaseBdev1",
00:10:25.198 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:25.198 "is_configured": false,
00:10:25.198 "data_offset": 0,
00:10:25.198 "data_size": 0
00:10:25.198 },
00:10:25.198 {
00:10:25.198 "name": null,
00:10:25.198 "uuid": "70f5fb91-ef7c-47d7-9db7-9d0829e76ab5",
00:10:25.198 "is_configured": false,
00:10:25.198 "data_offset": 0,
00:10:25.198 "data_size": 65536
00:10:25.198 },
00:10:25.198 {
00:10:25.198 "name": "BaseBdev3",
00:10:25.198 "uuid": "4230839c-9c4c-45d1-9b4a-5d58755bbb05",
00:10:25.198 "is_configured": true,
00:10:25.198 "data_offset": 0,
00:10:25.198 "data_size": 65536
00:10:25.198 },
00:10:25.198 {
00:10:25.198 "name": "BaseBdev4",
00:10:25.198 "uuid": "60925f5c-d2be-45fc-94c7-2d7516641493",
00:10:25.198 "is_configured": true,
00:10:25.198 "data_offset": 0,
00:10:25.198 "data_size": 65536
00:10:25.198 }
00:10:25.198 ]
00:10:25.198 }'
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:25.198 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.766 [2024-11-15 10:55:32.590590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:25.766 BaseBdev1
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.766 [
00:10:25.766 {
00:10:25.766 "name": "BaseBdev1",
00:10:25.766 "aliases": [
00:10:25.766 "3ba03ec0-2c59-4db3-b558-0c24d0344881"
00:10:25.766 ],
00:10:25.766 "product_name": "Malloc disk",
00:10:25.766 "block_size": 512,
00:10:25.766 "num_blocks": 65536,
00:10:25.766 "uuid": "3ba03ec0-2c59-4db3-b558-0c24d0344881",
00:10:25.766 "assigned_rate_limits": {
00:10:25.766 "rw_ios_per_sec": 0,
00:10:25.766 "rw_mbytes_per_sec": 0,
00:10:25.766 "r_mbytes_per_sec": 0,
00:10:25.766 "w_mbytes_per_sec": 0
00:10:25.766 },
00:10:25.766 "claimed": true,
00:10:25.766 "claim_type": "exclusive_write",
00:10:25.766 "zoned": false,
00:10:25.766 "supported_io_types": {
00:10:25.766 "read": true,
00:10:25.766 "write": true,
00:10:25.766 "unmap": true,
00:10:25.766 "flush": true,
00:10:25.766 "reset": true,
00:10:25.766 "nvme_admin": false,
00:10:25.766 "nvme_io": false,
00:10:25.766 "nvme_io_md": false,
00:10:25.766 "write_zeroes": true,
00:10:25.766 "zcopy": true,
00:10:25.766 "get_zone_info": false,
00:10:25.766 "zone_management": false,
00:10:25.766 "zone_append": false,
00:10:25.766 "compare": false,
00:10:25.766 "compare_and_write": false,
00:10:25.766 "abort": true,
00:10:25.766 "seek_hole": false,
00:10:25.766 "seek_data": false,
00:10:25.766 "copy": true,
00:10:25.766 "nvme_iov_md": false
00:10:25.766 },
00:10:25.766 "memory_domains": [
00:10:25.766 {
00:10:25.766 "dma_device_id": "system",
00:10:25.766 "dma_device_type": 1
00:10:25.766 },
00:10:25.766 {
00:10:25.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:25.766 "dma_device_type": 2
00:10:25.766 }
00:10:25.766 ],
00:10:25.766 "driver_specific": {}
00:10:25.766 }
00:10:25.766 ]
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:25.766 "name": "Existed_Raid",
00:10:25.766 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:25.766 "strip_size_kb": 64,
00:10:25.766 "state": "configuring",
00:10:25.766 "raid_level": "raid0",
00:10:25.766 "superblock": false,
00:10:25.766 "num_base_bdevs": 4,
00:10:25.766 "num_base_bdevs_discovered": 3,
00:10:25.766 "num_base_bdevs_operational": 4,
00:10:25.766 "base_bdevs_list": [
00:10:25.766 {
00:10:25.766 "name": "BaseBdev1",
00:10:25.766 "uuid": "3ba03ec0-2c59-4db3-b558-0c24d0344881",
00:10:25.766 "is_configured": true,
00:10:25.766 "data_offset": 0,
00:10:25.766 "data_size": 65536
00:10:25.766 },
00:10:25.766 {
00:10:25.766 "name": null,
00:10:25.766 "uuid": "70f5fb91-ef7c-47d7-9db7-9d0829e76ab5",
00:10:25.766 "is_configured": false,
00:10:25.766 "data_offset": 0,
00:10:25.766 "data_size": 65536
00:10:25.766 },
00:10:25.766 {
00:10:25.766 "name": "BaseBdev3",
00:10:25.766 "uuid": "4230839c-9c4c-45d1-9b4a-5d58755bbb05",
00:10:25.766 "is_configured": true,
00:10:25.766 "data_offset": 0,
00:10:25.766 "data_size": 65536
00:10:25.766 },
00:10:25.766 {
00:10:25.766 "name": "BaseBdev4",
00:10:25.766 "uuid": "60925f5c-d2be-45fc-94c7-2d7516641493",
00:10:25.766 "is_configured": true,
00:10:25.766 "data_offset": 0,
00:10:25.766 "data_size": 65536
00:10:25.766 }
00:10:25.766 ]
00:10:25.766 }'
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:25.766 10:55:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.333 [2024-11-15 10:55:33.141773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:26.333 "name": "Existed_Raid",
00:10:26.333 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:26.333 "strip_size_kb": 64,
00:10:26.333 "state": "configuring",
00:10:26.333 "raid_level": "raid0",
00:10:26.333 "superblock": false,
00:10:26.333 "num_base_bdevs": 4,
00:10:26.333 "num_base_bdevs_discovered": 2,
00:10:26.333 "num_base_bdevs_operational": 4,
00:10:26.333 "base_bdevs_list": [
00:10:26.333 {
00:10:26.333 "name": "BaseBdev1",
00:10:26.333 "uuid": "3ba03ec0-2c59-4db3-b558-0c24d0344881",
00:10:26.333 "is_configured": true,
00:10:26.333 "data_offset": 0,
00:10:26.333 "data_size": 65536
00:10:26.333 },
00:10:26.333 {
00:10:26.333 "name": null,
00:10:26.333 "uuid": "70f5fb91-ef7c-47d7-9db7-9d0829e76ab5",
00:10:26.333 "is_configured": false,
00:10:26.333 "data_offset": 0,
00:10:26.333 "data_size": 65536
00:10:26.333 },
00:10:26.333 {
00:10:26.333 "name": null,
00:10:26.333 "uuid": "4230839c-9c4c-45d1-9b4a-5d58755bbb05",
00:10:26.333 "is_configured": false,
00:10:26.333 "data_offset": 0,
00:10:26.333 "data_size": 65536
00:10:26.333 },
00:10:26.333 {
00:10:26.333 "name": "BaseBdev4",
00:10:26.333 "uuid": "60925f5c-d2be-45fc-94c7-2d7516641493",
00:10:26.333 "is_configured": true,
00:10:26.333 "data_offset": 0,
00:10:26.333 "data_size": 65536
00:10:26.333 }
00:10:26.333 ]
00:10:26.333 }'
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:26.333 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.899 [2024-11-15 10:55:33.632905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:26.899 "name": "Existed_Raid",
00:10:26.899 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:26.899 "strip_size_kb": 64,
00:10:26.899 "state": "configuring",
00:10:26.899 "raid_level": "raid0",
00:10:26.899 "superblock": false,
00:10:26.899 "num_base_bdevs": 4,
00:10:26.899 "num_base_bdevs_discovered": 3,
00:10:26.899 "num_base_bdevs_operational": 4,
00:10:26.899 "base_bdevs_list": [
00:10:26.899 {
00:10:26.899 "name": "BaseBdev1",
00:10:26.899 "uuid": "3ba03ec0-2c59-4db3-b558-0c24d0344881",
00:10:26.899 "is_configured": true,
00:10:26.899 "data_offset": 0,
00:10:26.899 "data_size": 65536
00:10:26.899 },
00:10:26.899 {
00:10:26.899 "name": null,
00:10:26.899 "uuid": "70f5fb91-ef7c-47d7-9db7-9d0829e76ab5",
00:10:26.899 "is_configured": false,
00:10:26.899 "data_offset": 0,
00:10:26.899 "data_size": 65536
00:10:26.899 },
00:10:26.899 {
00:10:26.899 "name": "BaseBdev3",
00:10:26.899 "uuid": "4230839c-9c4c-45d1-9b4a-5d58755bbb05",
00:10:26.899 "is_configured": true,
00:10:26.899 "data_offset": 0,
00:10:26.899 "data_size": 65536
00:10:26.899 },
00:10:26.899 {
00:10:26.899 "name": "BaseBdev4",
00:10:26.899 "uuid": "60925f5c-d2be-45fc-94c7-2d7516641493",
00:10:26.899 "is_configured": true,
00:10:26.899 "data_offset": 0,
00:10:26.899 "data_size": 65536
00:10:26.899 }
00:10:26.899 ]
00:10:26.899 }'
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:26.899 10:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.466 [2024-11-15 10:55:34.196036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:27.466 "name": "Existed_Raid",
00:10:27.466 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:27.466 "strip_size_kb": 64,
00:10:27.466 "state": "configuring",
00:10:27.466 "raid_level": "raid0",
00:10:27.466 "superblock": false,
00:10:27.466 "num_base_bdevs": 4,
00:10:27.466 "num_base_bdevs_discovered": 2,
00:10:27.466 "num_base_bdevs_operational": 4,
00:10:27.466 "base_bdevs_list": [
00:10:27.466 {
00:10:27.466 "name": null,
00:10:27.466 "uuid": "3ba03ec0-2c59-4db3-b558-0c24d0344881",
00:10:27.466 "is_configured": false,
00:10:27.466 "data_offset": 0,
00:10:27.466 "data_size": 65536
00:10:27.466 },
00:10:27.466 {
00:10:27.466 "name": null,
00:10:27.466 "uuid": "70f5fb91-ef7c-47d7-9db7-9d0829e76ab5",
00:10:27.466 "is_configured": false,
00:10:27.466 "data_offset": 0,
00:10:27.466 "data_size": 65536
00:10:27.466 },
00:10:27.466 {
00:10:27.466 "name": "BaseBdev3",
00:10:27.466 "uuid": "4230839c-9c4c-45d1-9b4a-5d58755bbb05",
00:10:27.466 "is_configured": true,
00:10:27.466 "data_offset": 0,
00:10:27.466 "data_size": 65536
00:10:27.466 },
00:10:27.466 {
00:10:27.466 "name": "BaseBdev4",
00:10:27.466 "uuid": "60925f5c-d2be-45fc-94c7-2d7516641493",
00:10:27.466 "is_configured": true,
00:10:27.466 "data_offset": 0,
00:10:27.466 "data_size": 65536
00:10:27.466 }
00:10:27.466 ]
00:10:27.466 }'
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:27.466 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.031 [2024-11-15 10:55:34.841134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:28.031 "name": "Existed_Raid",
00:10:28.031 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:28.031 "strip_size_kb": 64,
00:10:28.031 "state": "configuring",
00:10:28.031 "raid_level": "raid0",
00:10:28.031 "superblock": false,
00:10:28.031 "num_base_bdevs": 4,
00:10:28.031 "num_base_bdevs_discovered": 3,
00:10:28.031 "num_base_bdevs_operational": 4,
00:10:28.031 "base_bdevs_list": [
00:10:28.031 {
00:10:28.031 "name": null,
00:10:28.031 "uuid": "3ba03ec0-2c59-4db3-b558-0c24d0344881",
00:10:28.031 "is_configured": false,
00:10:28.031 "data_offset": 0,
00:10:28.031 "data_size": 65536
00:10:28.031 },
00:10:28.031 {
00:10:28.031 "name": "BaseBdev2",
00:10:28.031 "uuid": "70f5fb91-ef7c-47d7-9db7-9d0829e76ab5",
00:10:28.031 "is_configured": true,
00:10:28.031 "data_offset": 0,
00:10:28.031 "data_size": 65536
00:10:28.031 },
00:10:28.031 {
00:10:28.031 "name": "BaseBdev3",
00:10:28.031 "uuid": "4230839c-9c4c-45d1-9b4a-5d58755bbb05",
00:10:28.031 "is_configured": true,
00:10:28.031 "data_offset": 0,
00:10:28.031 "data_size": 65536
00:10:28.031 },
00:10:28.031 {
00:10:28.031 "name": "BaseBdev4",
00:10:28.031 "uuid": "60925f5c-d2be-45fc-94c7-2d7516641493",
00:10:28.031 "is_configured": true,
00:10:28.031 "data_offset": 0,
00:10:28.031 "data_size": 65536
00:10:28.031 }
00:10:28.031 ]
00:10:28.031 }'
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:28.031 10:55:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3ba03ec0-2c59-4db3-b558-0c24d0344881
00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.598 [2024-11-15 10:55:35.405662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:10:28.598 [2024-11-15 10:55:35.405803] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:10:28.598 [2024-11-15 10:55:35.405831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:10:28.598 [2024-11-15 10:55:35.406142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:10:28.598 [2024-11-15 10:55:35.406355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:28.598 [2024-11-15 10:55:35.406406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:28.598 [2024-11-15 10:55:35.406712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.598 NewBaseBdev 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:28.598 [ 00:10:28.598 { 00:10:28.598 "name": "NewBaseBdev", 00:10:28.598 "aliases": [ 00:10:28.598 "3ba03ec0-2c59-4db3-b558-0c24d0344881" 00:10:28.598 ], 00:10:28.598 "product_name": "Malloc disk", 00:10:28.598 "block_size": 512, 00:10:28.598 "num_blocks": 65536, 00:10:28.598 "uuid": "3ba03ec0-2c59-4db3-b558-0c24d0344881", 00:10:28.598 "assigned_rate_limits": { 00:10:28.598 "rw_ios_per_sec": 0, 00:10:28.598 "rw_mbytes_per_sec": 0, 00:10:28.598 "r_mbytes_per_sec": 0, 00:10:28.598 "w_mbytes_per_sec": 0 00:10:28.598 }, 00:10:28.598 "claimed": true, 00:10:28.598 "claim_type": "exclusive_write", 00:10:28.598 "zoned": false, 00:10:28.598 "supported_io_types": { 00:10:28.598 "read": true, 00:10:28.598 "write": true, 00:10:28.598 "unmap": true, 00:10:28.598 "flush": true, 00:10:28.598 "reset": true, 00:10:28.598 "nvme_admin": false, 00:10:28.598 "nvme_io": false, 00:10:28.598 "nvme_io_md": false, 00:10:28.598 "write_zeroes": true, 00:10:28.598 "zcopy": true, 00:10:28.598 "get_zone_info": false, 00:10:28.598 "zone_management": false, 00:10:28.598 "zone_append": false, 00:10:28.598 "compare": false, 00:10:28.598 "compare_and_write": false, 00:10:28.598 "abort": true, 00:10:28.598 "seek_hole": false, 00:10:28.598 "seek_data": false, 00:10:28.598 "copy": true, 00:10:28.598 "nvme_iov_md": false 00:10:28.598 }, 00:10:28.598 "memory_domains": [ 00:10:28.598 { 00:10:28.598 "dma_device_id": "system", 00:10:28.598 "dma_device_type": 1 00:10:28.598 }, 00:10:28.598 { 00:10:28.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.598 "dma_device_type": 2 00:10:28.598 } 00:10:28.598 ], 00:10:28.598 "driver_specific": {} 00:10:28.598 } 00:10:28.598 ] 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.598 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.598 "name": "Existed_Raid", 00:10:28.598 "uuid": "908922c6-289d-4a9f-8e95-354745686c4f", 00:10:28.598 "strip_size_kb": 64, 00:10:28.598 "state": "online", 00:10:28.598 "raid_level": "raid0", 00:10:28.598 "superblock": false, 00:10:28.598 "num_base_bdevs": 4, 00:10:28.598 
"num_base_bdevs_discovered": 4, 00:10:28.598 "num_base_bdevs_operational": 4, 00:10:28.598 "base_bdevs_list": [ 00:10:28.598 { 00:10:28.598 "name": "NewBaseBdev", 00:10:28.598 "uuid": "3ba03ec0-2c59-4db3-b558-0c24d0344881", 00:10:28.598 "is_configured": true, 00:10:28.598 "data_offset": 0, 00:10:28.598 "data_size": 65536 00:10:28.598 }, 00:10:28.598 { 00:10:28.598 "name": "BaseBdev2", 00:10:28.599 "uuid": "70f5fb91-ef7c-47d7-9db7-9d0829e76ab5", 00:10:28.599 "is_configured": true, 00:10:28.599 "data_offset": 0, 00:10:28.599 "data_size": 65536 00:10:28.599 }, 00:10:28.599 { 00:10:28.599 "name": "BaseBdev3", 00:10:28.599 "uuid": "4230839c-9c4c-45d1-9b4a-5d58755bbb05", 00:10:28.599 "is_configured": true, 00:10:28.599 "data_offset": 0, 00:10:28.599 "data_size": 65536 00:10:28.599 }, 00:10:28.599 { 00:10:28.599 "name": "BaseBdev4", 00:10:28.599 "uuid": "60925f5c-d2be-45fc-94c7-2d7516641493", 00:10:28.599 "is_configured": true, 00:10:28.599 "data_offset": 0, 00:10:28.599 "data_size": 65536 00:10:28.599 } 00:10:28.599 ] 00:10:28.599 }' 00:10:28.599 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.599 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.165 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:29.165 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:29.165 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:29.165 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:29.165 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:29.165 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:29.165 10:55:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:29.165 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:29.165 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.165 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.165 [2024-11-15 10:55:35.897294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.165 10:55:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.165 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:29.165 "name": "Existed_Raid", 00:10:29.165 "aliases": [ 00:10:29.165 "908922c6-289d-4a9f-8e95-354745686c4f" 00:10:29.165 ], 00:10:29.165 "product_name": "Raid Volume", 00:10:29.165 "block_size": 512, 00:10:29.165 "num_blocks": 262144, 00:10:29.165 "uuid": "908922c6-289d-4a9f-8e95-354745686c4f", 00:10:29.165 "assigned_rate_limits": { 00:10:29.165 "rw_ios_per_sec": 0, 00:10:29.165 "rw_mbytes_per_sec": 0, 00:10:29.165 "r_mbytes_per_sec": 0, 00:10:29.165 "w_mbytes_per_sec": 0 00:10:29.165 }, 00:10:29.165 "claimed": false, 00:10:29.165 "zoned": false, 00:10:29.165 "supported_io_types": { 00:10:29.165 "read": true, 00:10:29.165 "write": true, 00:10:29.165 "unmap": true, 00:10:29.165 "flush": true, 00:10:29.165 "reset": true, 00:10:29.165 "nvme_admin": false, 00:10:29.165 "nvme_io": false, 00:10:29.166 "nvme_io_md": false, 00:10:29.166 "write_zeroes": true, 00:10:29.166 "zcopy": false, 00:10:29.166 "get_zone_info": false, 00:10:29.166 "zone_management": false, 00:10:29.166 "zone_append": false, 00:10:29.166 "compare": false, 00:10:29.166 "compare_and_write": false, 00:10:29.166 "abort": false, 00:10:29.166 "seek_hole": false, 00:10:29.166 "seek_data": false, 00:10:29.166 "copy": false, 00:10:29.166 "nvme_iov_md": false 00:10:29.166 }, 00:10:29.166 "memory_domains": [ 
00:10:29.166 { 00:10:29.166 "dma_device_id": "system", 00:10:29.166 "dma_device_type": 1 00:10:29.166 }, 00:10:29.166 { 00:10:29.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.166 "dma_device_type": 2 00:10:29.166 }, 00:10:29.166 { 00:10:29.166 "dma_device_id": "system", 00:10:29.166 "dma_device_type": 1 00:10:29.166 }, 00:10:29.166 { 00:10:29.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.166 "dma_device_type": 2 00:10:29.166 }, 00:10:29.166 { 00:10:29.166 "dma_device_id": "system", 00:10:29.166 "dma_device_type": 1 00:10:29.166 }, 00:10:29.166 { 00:10:29.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.166 "dma_device_type": 2 00:10:29.166 }, 00:10:29.166 { 00:10:29.166 "dma_device_id": "system", 00:10:29.166 "dma_device_type": 1 00:10:29.166 }, 00:10:29.166 { 00:10:29.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.166 "dma_device_type": 2 00:10:29.166 } 00:10:29.166 ], 00:10:29.166 "driver_specific": { 00:10:29.166 "raid": { 00:10:29.166 "uuid": "908922c6-289d-4a9f-8e95-354745686c4f", 00:10:29.166 "strip_size_kb": 64, 00:10:29.166 "state": "online", 00:10:29.166 "raid_level": "raid0", 00:10:29.166 "superblock": false, 00:10:29.166 "num_base_bdevs": 4, 00:10:29.166 "num_base_bdevs_discovered": 4, 00:10:29.166 "num_base_bdevs_operational": 4, 00:10:29.166 "base_bdevs_list": [ 00:10:29.166 { 00:10:29.166 "name": "NewBaseBdev", 00:10:29.166 "uuid": "3ba03ec0-2c59-4db3-b558-0c24d0344881", 00:10:29.166 "is_configured": true, 00:10:29.166 "data_offset": 0, 00:10:29.166 "data_size": 65536 00:10:29.166 }, 00:10:29.166 { 00:10:29.166 "name": "BaseBdev2", 00:10:29.166 "uuid": "70f5fb91-ef7c-47d7-9db7-9d0829e76ab5", 00:10:29.166 "is_configured": true, 00:10:29.166 "data_offset": 0, 00:10:29.166 "data_size": 65536 00:10:29.166 }, 00:10:29.166 { 00:10:29.166 "name": "BaseBdev3", 00:10:29.166 "uuid": "4230839c-9c4c-45d1-9b4a-5d58755bbb05", 00:10:29.166 "is_configured": true, 00:10:29.166 "data_offset": 0, 00:10:29.166 "data_size": 65536 
00:10:29.166 }, 00:10:29.166 { 00:10:29.166 "name": "BaseBdev4", 00:10:29.166 "uuid": "60925f5c-d2be-45fc-94c7-2d7516641493", 00:10:29.166 "is_configured": true, 00:10:29.166 "data_offset": 0, 00:10:29.166 "data_size": 65536 00:10:29.166 } 00:10:29.166 ] 00:10:29.166 } 00:10:29.166 } 00:10:29.166 }' 00:10:29.166 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.166 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:29.166 BaseBdev2 00:10:29.166 BaseBdev3 00:10:29.166 BaseBdev4' 00:10:29.166 10:55:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.166 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.166 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.166 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:29.166 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.166 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.166 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.166 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.166 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.166 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.166 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.166 
10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:29.166 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.166 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.166 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.425 [2024-11-15 10:55:36.180456] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:29.425 [2024-11-15 10:55:36.180488] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.425 [2024-11-15 10:55:36.180581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.425 [2024-11-15 10:55:36.180652] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.425 [2024-11-15 10:55:36.180665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69525 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@952 -- # '[' -z 69525 ']' 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69525 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69525 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69525' 00:10:29.425 killing process with pid 69525 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69525 00:10:29.425 [2024-11-15 10:55:36.226408] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:29.425 10:55:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69525 00:10:29.991 [2024-11-15 10:55:36.658291] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.925 10:55:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:30.925 00:10:30.925 real 0m11.682s 00:10:30.925 user 0m18.577s 00:10:30.925 sys 0m1.992s 00:10:30.925 10:55:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:30.925 10:55:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.925 ************************************ 00:10:30.925 END TEST raid_state_function_test 00:10:30.925 ************************************ 00:10:31.183 10:55:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:31.183 10:55:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:31.183 10:55:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:31.183 10:55:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:31.183 ************************************ 00:10:31.183 START TEST raid_state_function_test_sb 00:10:31.183 ************************************ 00:10:31.183 10:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:10:31.183 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:31.183 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:31.183 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:31.183 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:31.183 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:31.183 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:31.184 
10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70197 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70197' 00:10:31.184 Process raid pid: 70197 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70197 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 70197 ']' 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:31.184 10:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.184 [2024-11-15 10:55:37.983889] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:10:31.184 [2024-11-15 10:55:37.984129] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.442 [2024-11-15 10:55:38.143681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.442 [2024-11-15 10:55:38.275142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.700 [2024-11-15 10:55:38.485806] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.700 [2024-11-15 10:55:38.485933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.959 [2024-11-15 10:55:38.865548] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.959 [2024-11-15 10:55:38.865601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.959 [2024-11-15 10:55:38.865614] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.959 [2024-11-15 10:55:38.865625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.959 [2024-11-15 10:55:38.865632] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:31.959 [2024-11-15 10:55:38.865642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.959 [2024-11-15 10:55:38.865649] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:31.959 [2024-11-15 10:55:38.865659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.959 10:55:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.959 10:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.229 10:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.229 10:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.229 "name": "Existed_Raid", 00:10:32.229 "uuid": "0e31c373-9bc5-45c8-84af-5ac00085095e", 00:10:32.229 "strip_size_kb": 64, 00:10:32.229 "state": "configuring", 00:10:32.229 "raid_level": "raid0", 00:10:32.229 "superblock": true, 00:10:32.229 "num_base_bdevs": 4, 00:10:32.229 "num_base_bdevs_discovered": 0, 00:10:32.229 "num_base_bdevs_operational": 4, 00:10:32.229 "base_bdevs_list": [ 00:10:32.229 { 00:10:32.229 "name": "BaseBdev1", 00:10:32.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.229 "is_configured": false, 00:10:32.229 "data_offset": 0, 00:10:32.229 "data_size": 0 00:10:32.229 }, 00:10:32.229 { 00:10:32.229 "name": "BaseBdev2", 00:10:32.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.229 "is_configured": false, 00:10:32.229 "data_offset": 0, 00:10:32.229 "data_size": 0 00:10:32.229 }, 00:10:32.229 { 00:10:32.229 "name": "BaseBdev3", 00:10:32.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.229 "is_configured": false, 00:10:32.229 "data_offset": 0, 00:10:32.229 "data_size": 0 00:10:32.229 }, 00:10:32.229 { 00:10:32.229 "name": "BaseBdev4", 00:10:32.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.229 "is_configured": false, 00:10:32.229 "data_offset": 0, 00:10:32.229 "data_size": 0 00:10:32.229 } 00:10:32.229 ] 00:10:32.229 }' 00:10:32.229 10:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.229 10:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.488 10:55:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.488 [2024-11-15 10:55:39.288780] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.488 [2024-11-15 10:55:39.288822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.488 [2024-11-15 10:55:39.300761] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:32.488 [2024-11-15 10:55:39.300809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:32.488 [2024-11-15 10:55:39.300818] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.488 [2024-11-15 10:55:39.300844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.488 [2024-11-15 10:55:39.300851] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:32.488 [2024-11-15 10:55:39.300861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:32.488 [2024-11-15 10:55:39.300868] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:32.488 [2024-11-15 10:55:39.300877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.488 [2024-11-15 10:55:39.348437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.488 BaseBdev1 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.488 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.488 [ 00:10:32.488 { 00:10:32.488 "name": "BaseBdev1", 00:10:32.488 "aliases": [ 00:10:32.488 "25d77340-8689-4cd0-9d79-cbfffaeeff28" 00:10:32.488 ], 00:10:32.488 "product_name": "Malloc disk", 00:10:32.488 "block_size": 512, 00:10:32.488 "num_blocks": 65536, 00:10:32.488 "uuid": "25d77340-8689-4cd0-9d79-cbfffaeeff28", 00:10:32.488 "assigned_rate_limits": { 00:10:32.488 "rw_ios_per_sec": 0, 00:10:32.488 "rw_mbytes_per_sec": 0, 00:10:32.488 "r_mbytes_per_sec": 0, 00:10:32.488 "w_mbytes_per_sec": 0 00:10:32.488 }, 00:10:32.488 "claimed": true, 00:10:32.488 "claim_type": "exclusive_write", 00:10:32.488 "zoned": false, 00:10:32.488 "supported_io_types": { 00:10:32.488 "read": true, 00:10:32.488 "write": true, 00:10:32.488 "unmap": true, 00:10:32.488 "flush": true, 00:10:32.488 "reset": true, 00:10:32.488 "nvme_admin": false, 00:10:32.488 "nvme_io": false, 00:10:32.488 "nvme_io_md": false, 00:10:32.488 "write_zeroes": true, 00:10:32.488 "zcopy": true, 00:10:32.488 "get_zone_info": false, 00:10:32.488 "zone_management": false, 00:10:32.488 "zone_append": false, 00:10:32.488 "compare": false, 00:10:32.488 "compare_and_write": false, 00:10:32.488 "abort": true, 00:10:32.488 "seek_hole": false, 00:10:32.488 "seek_data": false, 00:10:32.488 "copy": true, 00:10:32.488 "nvme_iov_md": false 00:10:32.488 }, 00:10:32.488 "memory_domains": [ 00:10:32.488 { 00:10:32.488 "dma_device_id": "system", 00:10:32.488 "dma_device_type": 1 00:10:32.488 }, 00:10:32.488 { 00:10:32.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.489 "dma_device_type": 2 00:10:32.489 } 
00:10:32.489 ], 00:10:32.489 "driver_specific": {} 00:10:32.489 } 00:10:32.489 ] 00:10:32.489 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.489 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:32.489 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.489 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.489 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.489 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.489 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.489 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.489 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.489 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.489 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.489 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.489 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.489 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.489 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.489 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.747 10:55:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.747 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.747 "name": "Existed_Raid", 00:10:32.747 "uuid": "ad212736-0caf-4e83-b2c4-8262abd3a5e4", 00:10:32.747 "strip_size_kb": 64, 00:10:32.747 "state": "configuring", 00:10:32.747 "raid_level": "raid0", 00:10:32.747 "superblock": true, 00:10:32.747 "num_base_bdevs": 4, 00:10:32.747 "num_base_bdevs_discovered": 1, 00:10:32.747 "num_base_bdevs_operational": 4, 00:10:32.747 "base_bdevs_list": [ 00:10:32.747 { 00:10:32.747 "name": "BaseBdev1", 00:10:32.747 "uuid": "25d77340-8689-4cd0-9d79-cbfffaeeff28", 00:10:32.747 "is_configured": true, 00:10:32.747 "data_offset": 2048, 00:10:32.747 "data_size": 63488 00:10:32.747 }, 00:10:32.747 { 00:10:32.747 "name": "BaseBdev2", 00:10:32.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.747 "is_configured": false, 00:10:32.747 "data_offset": 0, 00:10:32.747 "data_size": 0 00:10:32.747 }, 00:10:32.747 { 00:10:32.747 "name": "BaseBdev3", 00:10:32.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.747 "is_configured": false, 00:10:32.747 "data_offset": 0, 00:10:32.747 "data_size": 0 00:10:32.748 }, 00:10:32.748 { 00:10:32.748 "name": "BaseBdev4", 00:10:32.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.748 "is_configured": false, 00:10:32.748 "data_offset": 0, 00:10:32.748 "data_size": 0 00:10:32.748 } 00:10:32.748 ] 00:10:32.748 }' 00:10:32.748 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.748 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.006 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.006 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.006 10:55:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.006 [2024-11-15 10:55:39.863645] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.006 [2024-11-15 10:55:39.863708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:33.006 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.006 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:33.006 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.006 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.006 [2024-11-15 10:55:39.875702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.006 [2024-11-15 10:55:39.877708] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.006 [2024-11-15 10:55:39.877816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.006 [2024-11-15 10:55:39.877848] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:33.006 [2024-11-15 10:55:39.877861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:33.006 [2024-11-15 10:55:39.877869] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:33.006 [2024-11-15 10:55:39.877878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:33.006 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.006 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:33.007 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.007 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:33.007 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.007 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.007 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.007 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.007 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.007 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.007 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.007 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.007 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.007 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.007 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.007 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.007 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.007 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.265 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:33.265 "name": "Existed_Raid", 00:10:33.265 "uuid": "a853fa61-44f4-4912-9c38-85bc3b39ab57", 00:10:33.265 "strip_size_kb": 64, 00:10:33.265 "state": "configuring", 00:10:33.265 "raid_level": "raid0", 00:10:33.265 "superblock": true, 00:10:33.265 "num_base_bdevs": 4, 00:10:33.265 "num_base_bdevs_discovered": 1, 00:10:33.265 "num_base_bdevs_operational": 4, 00:10:33.265 "base_bdevs_list": [ 00:10:33.265 { 00:10:33.265 "name": "BaseBdev1", 00:10:33.265 "uuid": "25d77340-8689-4cd0-9d79-cbfffaeeff28", 00:10:33.265 "is_configured": true, 00:10:33.265 "data_offset": 2048, 00:10:33.265 "data_size": 63488 00:10:33.265 }, 00:10:33.265 { 00:10:33.265 "name": "BaseBdev2", 00:10:33.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.265 "is_configured": false, 00:10:33.265 "data_offset": 0, 00:10:33.265 "data_size": 0 00:10:33.265 }, 00:10:33.265 { 00:10:33.265 "name": "BaseBdev3", 00:10:33.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.265 "is_configured": false, 00:10:33.265 "data_offset": 0, 00:10:33.265 "data_size": 0 00:10:33.265 }, 00:10:33.265 { 00:10:33.265 "name": "BaseBdev4", 00:10:33.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.265 "is_configured": false, 00:10:33.265 "data_offset": 0, 00:10:33.265 "data_size": 0 00:10:33.265 } 00:10:33.265 ] 00:10:33.265 }' 00:10:33.265 10:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.265 10:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.524 [2024-11-15 10:55:40.367054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:33.524 BaseBdev2 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.524 [ 00:10:33.524 { 00:10:33.524 "name": "BaseBdev2", 00:10:33.524 "aliases": [ 00:10:33.524 "03756e21-caf6-4c15-934c-4748e36f1e6c" 00:10:33.524 ], 00:10:33.524 "product_name": "Malloc disk", 00:10:33.524 "block_size": 512, 00:10:33.524 "num_blocks": 65536, 00:10:33.524 "uuid": "03756e21-caf6-4c15-934c-4748e36f1e6c", 
00:10:33.524 "assigned_rate_limits": { 00:10:33.524 "rw_ios_per_sec": 0, 00:10:33.524 "rw_mbytes_per_sec": 0, 00:10:33.524 "r_mbytes_per_sec": 0, 00:10:33.524 "w_mbytes_per_sec": 0 00:10:33.524 }, 00:10:33.524 "claimed": true, 00:10:33.524 "claim_type": "exclusive_write", 00:10:33.524 "zoned": false, 00:10:33.524 "supported_io_types": { 00:10:33.524 "read": true, 00:10:33.524 "write": true, 00:10:33.524 "unmap": true, 00:10:33.524 "flush": true, 00:10:33.524 "reset": true, 00:10:33.524 "nvme_admin": false, 00:10:33.524 "nvme_io": false, 00:10:33.524 "nvme_io_md": false, 00:10:33.524 "write_zeroes": true, 00:10:33.524 "zcopy": true, 00:10:33.524 "get_zone_info": false, 00:10:33.524 "zone_management": false, 00:10:33.524 "zone_append": false, 00:10:33.524 "compare": false, 00:10:33.524 "compare_and_write": false, 00:10:33.524 "abort": true, 00:10:33.524 "seek_hole": false, 00:10:33.524 "seek_data": false, 00:10:33.524 "copy": true, 00:10:33.524 "nvme_iov_md": false 00:10:33.524 }, 00:10:33.524 "memory_domains": [ 00:10:33.524 { 00:10:33.524 "dma_device_id": "system", 00:10:33.524 "dma_device_type": 1 00:10:33.524 }, 00:10:33.524 { 00:10:33.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.524 "dma_device_type": 2 00:10:33.524 } 00:10:33.524 ], 00:10:33.524 "driver_specific": {} 00:10:33.524 } 00:10:33.524 ] 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.524 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.783 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.783 "name": "Existed_Raid", 00:10:33.783 "uuid": "a853fa61-44f4-4912-9c38-85bc3b39ab57", 00:10:33.783 "strip_size_kb": 64, 00:10:33.783 "state": "configuring", 00:10:33.783 "raid_level": "raid0", 00:10:33.783 "superblock": true, 00:10:33.783 "num_base_bdevs": 4, 00:10:33.783 "num_base_bdevs_discovered": 2, 00:10:33.783 
"num_base_bdevs_operational": 4, 00:10:33.783 "base_bdevs_list": [ 00:10:33.783 { 00:10:33.783 "name": "BaseBdev1", 00:10:33.783 "uuid": "25d77340-8689-4cd0-9d79-cbfffaeeff28", 00:10:33.783 "is_configured": true, 00:10:33.783 "data_offset": 2048, 00:10:33.783 "data_size": 63488 00:10:33.783 }, 00:10:33.783 { 00:10:33.783 "name": "BaseBdev2", 00:10:33.783 "uuid": "03756e21-caf6-4c15-934c-4748e36f1e6c", 00:10:33.783 "is_configured": true, 00:10:33.783 "data_offset": 2048, 00:10:33.783 "data_size": 63488 00:10:33.783 }, 00:10:33.783 { 00:10:33.783 "name": "BaseBdev3", 00:10:33.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.783 "is_configured": false, 00:10:33.783 "data_offset": 0, 00:10:33.783 "data_size": 0 00:10:33.783 }, 00:10:33.783 { 00:10:33.783 "name": "BaseBdev4", 00:10:33.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.783 "is_configured": false, 00:10:33.783 "data_offset": 0, 00:10:33.783 "data_size": 0 00:10:33.783 } 00:10:33.783 ] 00:10:33.783 }' 00:10:33.783 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.783 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.041 [2024-11-15 10:55:40.896992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.041 BaseBdev3 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.041 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.041 [ 00:10:34.041 { 00:10:34.041 "name": "BaseBdev3", 00:10:34.041 "aliases": [ 00:10:34.041 "9bbd5e3c-1e82-4163-a1e0-a91fdb1a9c92" 00:10:34.041 ], 00:10:34.041 "product_name": "Malloc disk", 00:10:34.041 "block_size": 512, 00:10:34.041 "num_blocks": 65536, 00:10:34.041 "uuid": "9bbd5e3c-1e82-4163-a1e0-a91fdb1a9c92", 00:10:34.041 "assigned_rate_limits": { 00:10:34.041 "rw_ios_per_sec": 0, 00:10:34.041 "rw_mbytes_per_sec": 0, 00:10:34.041 "r_mbytes_per_sec": 0, 00:10:34.041 "w_mbytes_per_sec": 0 00:10:34.041 }, 00:10:34.041 "claimed": true, 00:10:34.041 "claim_type": "exclusive_write", 00:10:34.041 "zoned": false, 00:10:34.041 "supported_io_types": { 
00:10:34.041 "read": true, 00:10:34.041 "write": true, 00:10:34.041 "unmap": true, 00:10:34.041 "flush": true, 00:10:34.041 "reset": true, 00:10:34.041 "nvme_admin": false, 00:10:34.041 "nvme_io": false, 00:10:34.041 "nvme_io_md": false, 00:10:34.041 "write_zeroes": true, 00:10:34.041 "zcopy": true, 00:10:34.041 "get_zone_info": false, 00:10:34.041 "zone_management": false, 00:10:34.041 "zone_append": false, 00:10:34.041 "compare": false, 00:10:34.041 "compare_and_write": false, 00:10:34.041 "abort": true, 00:10:34.041 "seek_hole": false, 00:10:34.041 "seek_data": false, 00:10:34.041 "copy": true, 00:10:34.041 "nvme_iov_md": false 00:10:34.041 }, 00:10:34.041 "memory_domains": [ 00:10:34.041 { 00:10:34.041 "dma_device_id": "system", 00:10:34.041 "dma_device_type": 1 00:10:34.041 }, 00:10:34.042 { 00:10:34.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.042 "dma_device_type": 2 00:10:34.042 } 00:10:34.042 ], 00:10:34.042 "driver_specific": {} 00:10:34.042 } 00:10:34.042 ] 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.042 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.300 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.300 "name": "Existed_Raid", 00:10:34.300 "uuid": "a853fa61-44f4-4912-9c38-85bc3b39ab57", 00:10:34.300 "strip_size_kb": 64, 00:10:34.300 "state": "configuring", 00:10:34.300 "raid_level": "raid0", 00:10:34.300 "superblock": true, 00:10:34.300 "num_base_bdevs": 4, 00:10:34.300 "num_base_bdevs_discovered": 3, 00:10:34.300 "num_base_bdevs_operational": 4, 00:10:34.300 "base_bdevs_list": [ 00:10:34.300 { 00:10:34.300 "name": "BaseBdev1", 00:10:34.300 "uuid": "25d77340-8689-4cd0-9d79-cbfffaeeff28", 00:10:34.300 "is_configured": true, 00:10:34.300 "data_offset": 2048, 00:10:34.300 "data_size": 63488 00:10:34.300 }, 00:10:34.300 { 00:10:34.300 "name": "BaseBdev2", 00:10:34.300 
"uuid": "03756e21-caf6-4c15-934c-4748e36f1e6c", 00:10:34.300 "is_configured": true, 00:10:34.300 "data_offset": 2048, 00:10:34.300 "data_size": 63488 00:10:34.300 }, 00:10:34.300 { 00:10:34.300 "name": "BaseBdev3", 00:10:34.300 "uuid": "9bbd5e3c-1e82-4163-a1e0-a91fdb1a9c92", 00:10:34.300 "is_configured": true, 00:10:34.300 "data_offset": 2048, 00:10:34.300 "data_size": 63488 00:10:34.300 }, 00:10:34.300 { 00:10:34.300 "name": "BaseBdev4", 00:10:34.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.300 "is_configured": false, 00:10:34.300 "data_offset": 0, 00:10:34.300 "data_size": 0 00:10:34.300 } 00:10:34.300 ] 00:10:34.300 }' 00:10:34.300 10:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.300 10:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.559 [2024-11-15 10:55:41.364538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:34.559 [2024-11-15 10:55:41.364954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:34.559 [2024-11-15 10:55:41.365014] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:34.559 [2024-11-15 10:55:41.365366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:34.559 BaseBdev4 00:10:34.559 [2024-11-15 10:55:41.365598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:34.559 [2024-11-15 10:55:41.365656] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.559 [2024-11-15 10:55:41.365874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.559 [ 00:10:34.559 { 00:10:34.559 "name": "BaseBdev4", 00:10:34.559 "aliases": [ 00:10:34.559 "fe9d14b5-9f86-4433-86a7-5ab67fea82c5" 00:10:34.559 ], 00:10:34.559 "product_name": "Malloc disk", 00:10:34.559 "block_size": 512, 00:10:34.559 
"num_blocks": 65536, 00:10:34.559 "uuid": "fe9d14b5-9f86-4433-86a7-5ab67fea82c5", 00:10:34.559 "assigned_rate_limits": { 00:10:34.559 "rw_ios_per_sec": 0, 00:10:34.559 "rw_mbytes_per_sec": 0, 00:10:34.559 "r_mbytes_per_sec": 0, 00:10:34.559 "w_mbytes_per_sec": 0 00:10:34.559 }, 00:10:34.559 "claimed": true, 00:10:34.559 "claim_type": "exclusive_write", 00:10:34.559 "zoned": false, 00:10:34.559 "supported_io_types": { 00:10:34.559 "read": true, 00:10:34.559 "write": true, 00:10:34.559 "unmap": true, 00:10:34.559 "flush": true, 00:10:34.559 "reset": true, 00:10:34.559 "nvme_admin": false, 00:10:34.559 "nvme_io": false, 00:10:34.559 "nvme_io_md": false, 00:10:34.559 "write_zeroes": true, 00:10:34.559 "zcopy": true, 00:10:34.559 "get_zone_info": false, 00:10:34.559 "zone_management": false, 00:10:34.559 "zone_append": false, 00:10:34.559 "compare": false, 00:10:34.559 "compare_and_write": false, 00:10:34.559 "abort": true, 00:10:34.559 "seek_hole": false, 00:10:34.559 "seek_data": false, 00:10:34.559 "copy": true, 00:10:34.559 "nvme_iov_md": false 00:10:34.559 }, 00:10:34.559 "memory_domains": [ 00:10:34.559 { 00:10:34.559 "dma_device_id": "system", 00:10:34.559 "dma_device_type": 1 00:10:34.559 }, 00:10:34.559 { 00:10:34.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.559 "dma_device_type": 2 00:10:34.559 } 00:10:34.559 ], 00:10:34.559 "driver_specific": {} 00:10:34.559 } 00:10:34.559 ] 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.559 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.559 "name": "Existed_Raid", 00:10:34.559 "uuid": "a853fa61-44f4-4912-9c38-85bc3b39ab57", 00:10:34.559 "strip_size_kb": 64, 00:10:34.559 "state": "online", 00:10:34.559 "raid_level": "raid0", 00:10:34.559 "superblock": true, 00:10:34.559 "num_base_bdevs": 4, 
00:10:34.559 "num_base_bdevs_discovered": 4, 00:10:34.559 "num_base_bdevs_operational": 4, 00:10:34.559 "base_bdevs_list": [ 00:10:34.559 { 00:10:34.559 "name": "BaseBdev1", 00:10:34.559 "uuid": "25d77340-8689-4cd0-9d79-cbfffaeeff28", 00:10:34.559 "is_configured": true, 00:10:34.559 "data_offset": 2048, 00:10:34.559 "data_size": 63488 00:10:34.559 }, 00:10:34.559 { 00:10:34.559 "name": "BaseBdev2", 00:10:34.559 "uuid": "03756e21-caf6-4c15-934c-4748e36f1e6c", 00:10:34.559 "is_configured": true, 00:10:34.559 "data_offset": 2048, 00:10:34.559 "data_size": 63488 00:10:34.559 }, 00:10:34.559 { 00:10:34.559 "name": "BaseBdev3", 00:10:34.559 "uuid": "9bbd5e3c-1e82-4163-a1e0-a91fdb1a9c92", 00:10:34.559 "is_configured": true, 00:10:34.559 "data_offset": 2048, 00:10:34.559 "data_size": 63488 00:10:34.559 }, 00:10:34.559 { 00:10:34.559 "name": "BaseBdev4", 00:10:34.559 "uuid": "fe9d14b5-9f86-4433-86a7-5ab67fea82c5", 00:10:34.560 "is_configured": true, 00:10:34.560 "data_offset": 2048, 00:10:34.560 "data_size": 63488 00:10:34.560 } 00:10:34.560 ] 00:10:34.560 }' 00:10:34.560 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.560 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.126 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:35.126 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:35.126 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:35.126 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:35.126 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:35.126 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:35.126 
10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:35.126 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.126 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.126 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:35.126 [2024-11-15 10:55:41.796451] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.126 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.126 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:35.126 "name": "Existed_Raid", 00:10:35.126 "aliases": [ 00:10:35.126 "a853fa61-44f4-4912-9c38-85bc3b39ab57" 00:10:35.126 ], 00:10:35.126 "product_name": "Raid Volume", 00:10:35.126 "block_size": 512, 00:10:35.126 "num_blocks": 253952, 00:10:35.126 "uuid": "a853fa61-44f4-4912-9c38-85bc3b39ab57", 00:10:35.126 "assigned_rate_limits": { 00:10:35.126 "rw_ios_per_sec": 0, 00:10:35.126 "rw_mbytes_per_sec": 0, 00:10:35.126 "r_mbytes_per_sec": 0, 00:10:35.126 "w_mbytes_per_sec": 0 00:10:35.126 }, 00:10:35.126 "claimed": false, 00:10:35.126 "zoned": false, 00:10:35.126 "supported_io_types": { 00:10:35.126 "read": true, 00:10:35.126 "write": true, 00:10:35.126 "unmap": true, 00:10:35.126 "flush": true, 00:10:35.126 "reset": true, 00:10:35.126 "nvme_admin": false, 00:10:35.126 "nvme_io": false, 00:10:35.126 "nvme_io_md": false, 00:10:35.126 "write_zeroes": true, 00:10:35.126 "zcopy": false, 00:10:35.126 "get_zone_info": false, 00:10:35.126 "zone_management": false, 00:10:35.126 "zone_append": false, 00:10:35.126 "compare": false, 00:10:35.126 "compare_and_write": false, 00:10:35.126 "abort": false, 00:10:35.126 "seek_hole": false, 00:10:35.126 "seek_data": false, 00:10:35.126 "copy": false, 00:10:35.126 
"nvme_iov_md": false 00:10:35.126 }, 00:10:35.126 "memory_domains": [ 00:10:35.126 { 00:10:35.126 "dma_device_id": "system", 00:10:35.126 "dma_device_type": 1 00:10:35.126 }, 00:10:35.126 { 00:10:35.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.126 "dma_device_type": 2 00:10:35.126 }, 00:10:35.126 { 00:10:35.126 "dma_device_id": "system", 00:10:35.126 "dma_device_type": 1 00:10:35.126 }, 00:10:35.126 { 00:10:35.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.126 "dma_device_type": 2 00:10:35.126 }, 00:10:35.126 { 00:10:35.126 "dma_device_id": "system", 00:10:35.126 "dma_device_type": 1 00:10:35.126 }, 00:10:35.126 { 00:10:35.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.126 "dma_device_type": 2 00:10:35.126 }, 00:10:35.126 { 00:10:35.126 "dma_device_id": "system", 00:10:35.126 "dma_device_type": 1 00:10:35.126 }, 00:10:35.127 { 00:10:35.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.127 "dma_device_type": 2 00:10:35.127 } 00:10:35.127 ], 00:10:35.127 "driver_specific": { 00:10:35.127 "raid": { 00:10:35.127 "uuid": "a853fa61-44f4-4912-9c38-85bc3b39ab57", 00:10:35.127 "strip_size_kb": 64, 00:10:35.127 "state": "online", 00:10:35.127 "raid_level": "raid0", 00:10:35.127 "superblock": true, 00:10:35.127 "num_base_bdevs": 4, 00:10:35.127 "num_base_bdevs_discovered": 4, 00:10:35.127 "num_base_bdevs_operational": 4, 00:10:35.127 "base_bdevs_list": [ 00:10:35.127 { 00:10:35.127 "name": "BaseBdev1", 00:10:35.127 "uuid": "25d77340-8689-4cd0-9d79-cbfffaeeff28", 00:10:35.127 "is_configured": true, 00:10:35.127 "data_offset": 2048, 00:10:35.127 "data_size": 63488 00:10:35.127 }, 00:10:35.127 { 00:10:35.127 "name": "BaseBdev2", 00:10:35.127 "uuid": "03756e21-caf6-4c15-934c-4748e36f1e6c", 00:10:35.127 "is_configured": true, 00:10:35.127 "data_offset": 2048, 00:10:35.127 "data_size": 63488 00:10:35.127 }, 00:10:35.127 { 00:10:35.127 "name": "BaseBdev3", 00:10:35.127 "uuid": "9bbd5e3c-1e82-4163-a1e0-a91fdb1a9c92", 00:10:35.127 "is_configured": true, 
00:10:35.127 "data_offset": 2048, 00:10:35.127 "data_size": 63488 00:10:35.127 }, 00:10:35.127 { 00:10:35.127 "name": "BaseBdev4", 00:10:35.127 "uuid": "fe9d14b5-9f86-4433-86a7-5ab67fea82c5", 00:10:35.127 "is_configured": true, 00:10:35.127 "data_offset": 2048, 00:10:35.127 "data_size": 63488 00:10:35.127 } 00:10:35.127 ] 00:10:35.127 } 00:10:35.127 } 00:10:35.127 }' 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:35.127 BaseBdev2 00:10:35.127 BaseBdev3 00:10:35.127 BaseBdev4' 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.127 10:55:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.127 10:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.127 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.127 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.127 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.127 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.127 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:35.127 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.127 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.127 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.385 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.385 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.385 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:35.385 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.386 [2024-11-15 10:55:42.131595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:35.386 [2024-11-15 10:55:42.131631] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.386 [2024-11-15 10:55:42.131686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.386 "name": "Existed_Raid", 00:10:35.386 "uuid": "a853fa61-44f4-4912-9c38-85bc3b39ab57", 00:10:35.386 "strip_size_kb": 64, 00:10:35.386 "state": "offline", 00:10:35.386 "raid_level": "raid0", 00:10:35.386 "superblock": true, 00:10:35.386 "num_base_bdevs": 4, 00:10:35.386 "num_base_bdevs_discovered": 3, 00:10:35.386 "num_base_bdevs_operational": 3, 00:10:35.386 "base_bdevs_list": [ 00:10:35.386 { 00:10:35.386 "name": null, 00:10:35.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.386 "is_configured": false, 00:10:35.386 "data_offset": 0, 00:10:35.386 "data_size": 63488 00:10:35.386 }, 00:10:35.386 { 00:10:35.386 "name": "BaseBdev2", 00:10:35.386 "uuid": "03756e21-caf6-4c15-934c-4748e36f1e6c", 00:10:35.386 "is_configured": true, 00:10:35.386 "data_offset": 2048, 00:10:35.386 "data_size": 63488 00:10:35.386 }, 00:10:35.386 { 00:10:35.386 "name": "BaseBdev3", 00:10:35.386 "uuid": "9bbd5e3c-1e82-4163-a1e0-a91fdb1a9c92", 00:10:35.386 "is_configured": true, 00:10:35.386 "data_offset": 2048, 00:10:35.386 "data_size": 63488 00:10:35.386 }, 00:10:35.386 { 00:10:35.386 "name": "BaseBdev4", 00:10:35.386 "uuid": "fe9d14b5-9f86-4433-86a7-5ab67fea82c5", 00:10:35.386 "is_configured": true, 00:10:35.386 "data_offset": 2048, 00:10:35.386 "data_size": 63488 00:10:35.386 } 00:10:35.386 ] 00:10:35.386 }' 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.386 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.978 10:55:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.978 [2024-11-15 10:55:42.785002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.978 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.235 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:36.235 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:36.235 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.235 10:55:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:36.235 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.235 10:55:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.235 [2024-11-15 10:55:42.957519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:36.235 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.235 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:36.235 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.235 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:36.235 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.235 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.235 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.235 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.235 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:36.235 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.235 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:36.235 10:55:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.235 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.235 [2024-11-15 10:55:43.116965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:36.236 [2024-11-15 10:55:43.117081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.494 BaseBdev2 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.494 [ 00:10:36.494 { 00:10:36.494 "name": "BaseBdev2", 00:10:36.494 "aliases": [ 00:10:36.494 
"a229a4c2-26c0-4b31-8fe1-5c84a8f5efce" 00:10:36.494 ], 00:10:36.494 "product_name": "Malloc disk", 00:10:36.494 "block_size": 512, 00:10:36.494 "num_blocks": 65536, 00:10:36.494 "uuid": "a229a4c2-26c0-4b31-8fe1-5c84a8f5efce", 00:10:36.494 "assigned_rate_limits": { 00:10:36.494 "rw_ios_per_sec": 0, 00:10:36.494 "rw_mbytes_per_sec": 0, 00:10:36.494 "r_mbytes_per_sec": 0, 00:10:36.494 "w_mbytes_per_sec": 0 00:10:36.494 }, 00:10:36.494 "claimed": false, 00:10:36.494 "zoned": false, 00:10:36.494 "supported_io_types": { 00:10:36.494 "read": true, 00:10:36.494 "write": true, 00:10:36.494 "unmap": true, 00:10:36.494 "flush": true, 00:10:36.494 "reset": true, 00:10:36.494 "nvme_admin": false, 00:10:36.494 "nvme_io": false, 00:10:36.494 "nvme_io_md": false, 00:10:36.494 "write_zeroes": true, 00:10:36.494 "zcopy": true, 00:10:36.494 "get_zone_info": false, 00:10:36.494 "zone_management": false, 00:10:36.494 "zone_append": false, 00:10:36.494 "compare": false, 00:10:36.494 "compare_and_write": false, 00:10:36.494 "abort": true, 00:10:36.494 "seek_hole": false, 00:10:36.494 "seek_data": false, 00:10:36.494 "copy": true, 00:10:36.494 "nvme_iov_md": false 00:10:36.494 }, 00:10:36.494 "memory_domains": [ 00:10:36.494 { 00:10:36.494 "dma_device_id": "system", 00:10:36.494 "dma_device_type": 1 00:10:36.494 }, 00:10:36.494 { 00:10:36.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.494 "dma_device_type": 2 00:10:36.494 } 00:10:36.494 ], 00:10:36.494 "driver_specific": {} 00:10:36.494 } 00:10:36.494 ] 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.494 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.495 10:55:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.495 BaseBdev3 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.495 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.754 [ 00:10:36.754 { 
00:10:36.754 "name": "BaseBdev3", 00:10:36.754 "aliases": [ 00:10:36.754 "161aeae6-ac81-4ef9-9581-f7ef0796c0d2" 00:10:36.754 ], 00:10:36.754 "product_name": "Malloc disk", 00:10:36.754 "block_size": 512, 00:10:36.754 "num_blocks": 65536, 00:10:36.754 "uuid": "161aeae6-ac81-4ef9-9581-f7ef0796c0d2", 00:10:36.754 "assigned_rate_limits": { 00:10:36.754 "rw_ios_per_sec": 0, 00:10:36.754 "rw_mbytes_per_sec": 0, 00:10:36.754 "r_mbytes_per_sec": 0, 00:10:36.754 "w_mbytes_per_sec": 0 00:10:36.754 }, 00:10:36.754 "claimed": false, 00:10:36.754 "zoned": false, 00:10:36.754 "supported_io_types": { 00:10:36.754 "read": true, 00:10:36.754 "write": true, 00:10:36.754 "unmap": true, 00:10:36.754 "flush": true, 00:10:36.754 "reset": true, 00:10:36.754 "nvme_admin": false, 00:10:36.754 "nvme_io": false, 00:10:36.754 "nvme_io_md": false, 00:10:36.754 "write_zeroes": true, 00:10:36.754 "zcopy": true, 00:10:36.754 "get_zone_info": false, 00:10:36.754 "zone_management": false, 00:10:36.754 "zone_append": false, 00:10:36.754 "compare": false, 00:10:36.754 "compare_and_write": false, 00:10:36.754 "abort": true, 00:10:36.754 "seek_hole": false, 00:10:36.754 "seek_data": false, 00:10:36.754 "copy": true, 00:10:36.754 "nvme_iov_md": false 00:10:36.754 }, 00:10:36.754 "memory_domains": [ 00:10:36.754 { 00:10:36.754 "dma_device_id": "system", 00:10:36.754 "dma_device_type": 1 00:10:36.754 }, 00:10:36.754 { 00:10:36.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.754 "dma_device_type": 2 00:10:36.754 } 00:10:36.754 ], 00:10:36.754 "driver_specific": {} 00:10:36.754 } 00:10:36.754 ] 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.754 BaseBdev4 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:36.754 [ 00:10:36.754 { 00:10:36.754 "name": "BaseBdev4", 00:10:36.754 "aliases": [ 00:10:36.754 "1da3f1ad-9941-434c-ba3f-e621ab1dcc3f" 00:10:36.754 ], 00:10:36.754 "product_name": "Malloc disk", 00:10:36.754 "block_size": 512, 00:10:36.754 "num_blocks": 65536, 00:10:36.754 "uuid": "1da3f1ad-9941-434c-ba3f-e621ab1dcc3f", 00:10:36.754 "assigned_rate_limits": { 00:10:36.754 "rw_ios_per_sec": 0, 00:10:36.754 "rw_mbytes_per_sec": 0, 00:10:36.754 "r_mbytes_per_sec": 0, 00:10:36.754 "w_mbytes_per_sec": 0 00:10:36.754 }, 00:10:36.754 "claimed": false, 00:10:36.754 "zoned": false, 00:10:36.754 "supported_io_types": { 00:10:36.754 "read": true, 00:10:36.754 "write": true, 00:10:36.754 "unmap": true, 00:10:36.754 "flush": true, 00:10:36.754 "reset": true, 00:10:36.754 "nvme_admin": false, 00:10:36.754 "nvme_io": false, 00:10:36.754 "nvme_io_md": false, 00:10:36.754 "write_zeroes": true, 00:10:36.754 "zcopy": true, 00:10:36.754 "get_zone_info": false, 00:10:36.754 "zone_management": false, 00:10:36.754 "zone_append": false, 00:10:36.754 "compare": false, 00:10:36.754 "compare_and_write": false, 00:10:36.754 "abort": true, 00:10:36.754 "seek_hole": false, 00:10:36.754 "seek_data": false, 00:10:36.754 "copy": true, 00:10:36.754 "nvme_iov_md": false 00:10:36.754 }, 00:10:36.754 "memory_domains": [ 00:10:36.754 { 00:10:36.754 "dma_device_id": "system", 00:10:36.754 "dma_device_type": 1 00:10:36.754 }, 00:10:36.754 { 00:10:36.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.754 "dma_device_type": 2 00:10:36.754 } 00:10:36.754 ], 00:10:36.754 "driver_specific": {} 00:10:36.754 } 00:10:36.754 ] 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.754 10:55:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.754 [2024-11-15 10:55:43.534087] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.754 [2024-11-15 10:55:43.534191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.754 [2024-11-15 10:55:43.534242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.754 [2024-11-15 10:55:43.536283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.754 [2024-11-15 10:55:43.536403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.754 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.754 "name": "Existed_Raid", 00:10:36.754 "uuid": "d30e7257-8b74-48b4-abb0-6adec76034cc", 00:10:36.754 "strip_size_kb": 64, 00:10:36.754 "state": "configuring", 00:10:36.754 "raid_level": "raid0", 00:10:36.754 "superblock": true, 00:10:36.754 "num_base_bdevs": 4, 00:10:36.754 "num_base_bdevs_discovered": 3, 00:10:36.754 "num_base_bdevs_operational": 4, 00:10:36.754 "base_bdevs_list": [ 00:10:36.754 { 00:10:36.754 "name": "BaseBdev1", 00:10:36.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.754 "is_configured": false, 00:10:36.754 "data_offset": 0, 00:10:36.754 "data_size": 0 00:10:36.754 }, 00:10:36.754 { 00:10:36.754 "name": "BaseBdev2", 00:10:36.754 "uuid": "a229a4c2-26c0-4b31-8fe1-5c84a8f5efce", 00:10:36.755 "is_configured": true, 00:10:36.755 "data_offset": 2048, 00:10:36.755 "data_size": 63488 
00:10:36.755 }, 00:10:36.755 { 00:10:36.755 "name": "BaseBdev3", 00:10:36.755 "uuid": "161aeae6-ac81-4ef9-9581-f7ef0796c0d2", 00:10:36.755 "is_configured": true, 00:10:36.755 "data_offset": 2048, 00:10:36.755 "data_size": 63488 00:10:36.755 }, 00:10:36.755 { 00:10:36.755 "name": "BaseBdev4", 00:10:36.755 "uuid": "1da3f1ad-9941-434c-ba3f-e621ab1dcc3f", 00:10:36.755 "is_configured": true, 00:10:36.755 "data_offset": 2048, 00:10:36.755 "data_size": 63488 00:10:36.755 } 00:10:36.755 ] 00:10:36.755 }' 00:10:36.755 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.755 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.320 [2024-11-15 10:55:43.985337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.320 10:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.320 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.320 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.320 "name": "Existed_Raid", 00:10:37.320 "uuid": "d30e7257-8b74-48b4-abb0-6adec76034cc", 00:10:37.320 "strip_size_kb": 64, 00:10:37.320 "state": "configuring", 00:10:37.320 "raid_level": "raid0", 00:10:37.320 "superblock": true, 00:10:37.320 "num_base_bdevs": 4, 00:10:37.320 "num_base_bdevs_discovered": 2, 00:10:37.320 "num_base_bdevs_operational": 4, 00:10:37.320 "base_bdevs_list": [ 00:10:37.320 { 00:10:37.320 "name": "BaseBdev1", 00:10:37.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.320 "is_configured": false, 00:10:37.320 "data_offset": 0, 00:10:37.320 "data_size": 0 00:10:37.320 }, 00:10:37.320 { 00:10:37.320 "name": null, 00:10:37.320 "uuid": "a229a4c2-26c0-4b31-8fe1-5c84a8f5efce", 00:10:37.320 "is_configured": false, 00:10:37.320 "data_offset": 0, 00:10:37.320 "data_size": 63488 
00:10:37.320 }, 00:10:37.320 { 00:10:37.320 "name": "BaseBdev3", 00:10:37.321 "uuid": "161aeae6-ac81-4ef9-9581-f7ef0796c0d2", 00:10:37.321 "is_configured": true, 00:10:37.321 "data_offset": 2048, 00:10:37.321 "data_size": 63488 00:10:37.321 }, 00:10:37.321 { 00:10:37.321 "name": "BaseBdev4", 00:10:37.321 "uuid": "1da3f1ad-9941-434c-ba3f-e621ab1dcc3f", 00:10:37.321 "is_configured": true, 00:10:37.321 "data_offset": 2048, 00:10:37.321 "data_size": 63488 00:10:37.321 } 00:10:37.321 ] 00:10:37.321 }' 00:10:37.321 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.321 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.578 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.578 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.578 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:37.578 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.578 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.578 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:37.578 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:37.578 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.578 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.837 [2024-11-15 10:55:44.504206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.837 BaseBdev1 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.837 [ 00:10:37.837 { 00:10:37.837 "name": "BaseBdev1", 00:10:37.837 "aliases": [ 00:10:37.837 "b97a074a-fcfc-4cfb-acd0-2ff8cff9ea39" 00:10:37.837 ], 00:10:37.837 "product_name": "Malloc disk", 00:10:37.837 "block_size": 512, 00:10:37.837 "num_blocks": 65536, 00:10:37.837 "uuid": "b97a074a-fcfc-4cfb-acd0-2ff8cff9ea39", 00:10:37.837 "assigned_rate_limits": { 00:10:37.837 "rw_ios_per_sec": 0, 00:10:37.837 "rw_mbytes_per_sec": 0, 
00:10:37.837 "r_mbytes_per_sec": 0, 00:10:37.837 "w_mbytes_per_sec": 0 00:10:37.837 }, 00:10:37.837 "claimed": true, 00:10:37.837 "claim_type": "exclusive_write", 00:10:37.837 "zoned": false, 00:10:37.837 "supported_io_types": { 00:10:37.837 "read": true, 00:10:37.837 "write": true, 00:10:37.837 "unmap": true, 00:10:37.837 "flush": true, 00:10:37.837 "reset": true, 00:10:37.837 "nvme_admin": false, 00:10:37.837 "nvme_io": false, 00:10:37.837 "nvme_io_md": false, 00:10:37.837 "write_zeroes": true, 00:10:37.837 "zcopy": true, 00:10:37.837 "get_zone_info": false, 00:10:37.837 "zone_management": false, 00:10:37.837 "zone_append": false, 00:10:37.837 "compare": false, 00:10:37.837 "compare_and_write": false, 00:10:37.837 "abort": true, 00:10:37.837 "seek_hole": false, 00:10:37.837 "seek_data": false, 00:10:37.837 "copy": true, 00:10:37.837 "nvme_iov_md": false 00:10:37.837 }, 00:10:37.837 "memory_domains": [ 00:10:37.837 { 00:10:37.837 "dma_device_id": "system", 00:10:37.837 "dma_device_type": 1 00:10:37.837 }, 00:10:37.837 { 00:10:37.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.837 "dma_device_type": 2 00:10:37.837 } 00:10:37.837 ], 00:10:37.837 "driver_specific": {} 00:10:37.837 } 00:10:37.837 ] 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.837 10:55:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.837 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.837 "name": "Existed_Raid", 00:10:37.837 "uuid": "d30e7257-8b74-48b4-abb0-6adec76034cc", 00:10:37.837 "strip_size_kb": 64, 00:10:37.837 "state": "configuring", 00:10:37.837 "raid_level": "raid0", 00:10:37.837 "superblock": true, 00:10:37.837 "num_base_bdevs": 4, 00:10:37.837 "num_base_bdevs_discovered": 3, 00:10:37.837 "num_base_bdevs_operational": 4, 00:10:37.837 "base_bdevs_list": [ 00:10:37.837 { 00:10:37.837 "name": "BaseBdev1", 00:10:37.837 "uuid": "b97a074a-fcfc-4cfb-acd0-2ff8cff9ea39", 00:10:37.837 "is_configured": true, 00:10:37.838 "data_offset": 2048, 00:10:37.838 "data_size": 63488 00:10:37.838 }, 00:10:37.838 { 
00:10:37.838 "name": null, 00:10:37.838 "uuid": "a229a4c2-26c0-4b31-8fe1-5c84a8f5efce", 00:10:37.838 "is_configured": false, 00:10:37.838 "data_offset": 0, 00:10:37.838 "data_size": 63488 00:10:37.838 }, 00:10:37.838 { 00:10:37.838 "name": "BaseBdev3", 00:10:37.838 "uuid": "161aeae6-ac81-4ef9-9581-f7ef0796c0d2", 00:10:37.838 "is_configured": true, 00:10:37.838 "data_offset": 2048, 00:10:37.838 "data_size": 63488 00:10:37.838 }, 00:10:37.838 { 00:10:37.838 "name": "BaseBdev4", 00:10:37.838 "uuid": "1da3f1ad-9941-434c-ba3f-e621ab1dcc3f", 00:10:37.838 "is_configured": true, 00:10:37.838 "data_offset": 2048, 00:10:37.838 "data_size": 63488 00:10:37.838 } 00:10:37.838 ] 00:10:37.838 }' 00:10:37.838 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.838 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.095 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.095 10:55:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:38.095 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.095 10:55:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.095 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.353 [2024-11-15 10:55:45.035444] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.353 10:55:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.353 "name": "Existed_Raid", 00:10:38.353 "uuid": "d30e7257-8b74-48b4-abb0-6adec76034cc", 00:10:38.353 "strip_size_kb": 64, 00:10:38.353 "state": "configuring", 00:10:38.353 "raid_level": "raid0", 00:10:38.353 "superblock": true, 00:10:38.353 "num_base_bdevs": 4, 00:10:38.353 "num_base_bdevs_discovered": 2, 00:10:38.353 "num_base_bdevs_operational": 4, 00:10:38.353 "base_bdevs_list": [ 00:10:38.353 { 00:10:38.353 "name": "BaseBdev1", 00:10:38.353 "uuid": "b97a074a-fcfc-4cfb-acd0-2ff8cff9ea39", 00:10:38.353 "is_configured": true, 00:10:38.353 "data_offset": 2048, 00:10:38.353 "data_size": 63488 00:10:38.353 }, 00:10:38.353 { 00:10:38.353 "name": null, 00:10:38.353 "uuid": "a229a4c2-26c0-4b31-8fe1-5c84a8f5efce", 00:10:38.353 "is_configured": false, 00:10:38.353 "data_offset": 0, 00:10:38.353 "data_size": 63488 00:10:38.353 }, 00:10:38.353 { 00:10:38.353 "name": null, 00:10:38.353 "uuid": "161aeae6-ac81-4ef9-9581-f7ef0796c0d2", 00:10:38.353 "is_configured": false, 00:10:38.353 "data_offset": 0, 00:10:38.353 "data_size": 63488 00:10:38.353 }, 00:10:38.353 { 00:10:38.353 "name": "BaseBdev4", 00:10:38.353 "uuid": "1da3f1ad-9941-434c-ba3f-e621ab1dcc3f", 00:10:38.353 "is_configured": true, 00:10:38.353 "data_offset": 2048, 00:10:38.353 "data_size": 63488 00:10:38.353 } 00:10:38.353 ] 00:10:38.353 }' 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.353 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.613 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.613 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:38.613 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.613 
10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.613 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.613 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:38.613 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:38.613 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.613 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.873 [2024-11-15 10:55:45.534585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.873 "name": "Existed_Raid", 00:10:38.873 "uuid": "d30e7257-8b74-48b4-abb0-6adec76034cc", 00:10:38.873 "strip_size_kb": 64, 00:10:38.873 "state": "configuring", 00:10:38.873 "raid_level": "raid0", 00:10:38.873 "superblock": true, 00:10:38.873 "num_base_bdevs": 4, 00:10:38.873 "num_base_bdevs_discovered": 3, 00:10:38.873 "num_base_bdevs_operational": 4, 00:10:38.873 "base_bdevs_list": [ 00:10:38.873 { 00:10:38.873 "name": "BaseBdev1", 00:10:38.873 "uuid": "b97a074a-fcfc-4cfb-acd0-2ff8cff9ea39", 00:10:38.873 "is_configured": true, 00:10:38.873 "data_offset": 2048, 00:10:38.873 "data_size": 63488 00:10:38.873 }, 00:10:38.873 { 00:10:38.873 "name": null, 00:10:38.873 "uuid": "a229a4c2-26c0-4b31-8fe1-5c84a8f5efce", 00:10:38.873 "is_configured": false, 00:10:38.873 "data_offset": 0, 00:10:38.873 "data_size": 63488 00:10:38.873 }, 00:10:38.873 { 00:10:38.873 "name": "BaseBdev3", 00:10:38.873 "uuid": "161aeae6-ac81-4ef9-9581-f7ef0796c0d2", 00:10:38.873 "is_configured": true, 00:10:38.873 "data_offset": 2048, 00:10:38.873 "data_size": 63488 00:10:38.873 }, 00:10:38.873 { 00:10:38.873 "name": "BaseBdev4", 00:10:38.873 "uuid": 
"1da3f1ad-9941-434c-ba3f-e621ab1dcc3f", 00:10:38.873 "is_configured": true, 00:10:38.873 "data_offset": 2048, 00:10:38.873 "data_size": 63488 00:10:38.873 } 00:10:38.873 ] 00:10:38.873 }' 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.873 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.133 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.133 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.133 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.133 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:39.133 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.133 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:39.133 10:55:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:39.133 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.133 10:55:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.133 [2024-11-15 10:55:45.993854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.395 "name": "Existed_Raid", 00:10:39.395 "uuid": "d30e7257-8b74-48b4-abb0-6adec76034cc", 00:10:39.395 "strip_size_kb": 64, 00:10:39.395 "state": "configuring", 00:10:39.395 "raid_level": "raid0", 00:10:39.395 "superblock": true, 00:10:39.395 "num_base_bdevs": 4, 00:10:39.395 "num_base_bdevs_discovered": 2, 00:10:39.395 "num_base_bdevs_operational": 4, 00:10:39.395 "base_bdevs_list": [ 00:10:39.395 { 00:10:39.395 "name": null, 00:10:39.395 
"uuid": "b97a074a-fcfc-4cfb-acd0-2ff8cff9ea39", 00:10:39.395 "is_configured": false, 00:10:39.395 "data_offset": 0, 00:10:39.395 "data_size": 63488 00:10:39.395 }, 00:10:39.395 { 00:10:39.395 "name": null, 00:10:39.395 "uuid": "a229a4c2-26c0-4b31-8fe1-5c84a8f5efce", 00:10:39.395 "is_configured": false, 00:10:39.395 "data_offset": 0, 00:10:39.395 "data_size": 63488 00:10:39.395 }, 00:10:39.395 { 00:10:39.395 "name": "BaseBdev3", 00:10:39.395 "uuid": "161aeae6-ac81-4ef9-9581-f7ef0796c0d2", 00:10:39.395 "is_configured": true, 00:10:39.395 "data_offset": 2048, 00:10:39.395 "data_size": 63488 00:10:39.395 }, 00:10:39.395 { 00:10:39.395 "name": "BaseBdev4", 00:10:39.395 "uuid": "1da3f1ad-9941-434c-ba3f-e621ab1dcc3f", 00:10:39.395 "is_configured": true, 00:10:39.395 "data_offset": 2048, 00:10:39.395 "data_size": 63488 00:10:39.395 } 00:10:39.395 ] 00:10:39.395 }' 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.395 10:55:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.963 [2024-11-15 10:55:46.642311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.963 10:55:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.963 "name": "Existed_Raid", 00:10:39.963 "uuid": "d30e7257-8b74-48b4-abb0-6adec76034cc", 00:10:39.963 "strip_size_kb": 64, 00:10:39.963 "state": "configuring", 00:10:39.963 "raid_level": "raid0", 00:10:39.963 "superblock": true, 00:10:39.963 "num_base_bdevs": 4, 00:10:39.963 "num_base_bdevs_discovered": 3, 00:10:39.963 "num_base_bdevs_operational": 4, 00:10:39.963 "base_bdevs_list": [ 00:10:39.963 { 00:10:39.963 "name": null, 00:10:39.963 "uuid": "b97a074a-fcfc-4cfb-acd0-2ff8cff9ea39", 00:10:39.963 "is_configured": false, 00:10:39.963 "data_offset": 0, 00:10:39.963 "data_size": 63488 00:10:39.963 }, 00:10:39.963 { 00:10:39.963 "name": "BaseBdev2", 00:10:39.963 "uuid": "a229a4c2-26c0-4b31-8fe1-5c84a8f5efce", 00:10:39.963 "is_configured": true, 00:10:39.963 "data_offset": 2048, 00:10:39.963 "data_size": 63488 00:10:39.963 }, 00:10:39.963 { 00:10:39.963 "name": "BaseBdev3", 00:10:39.963 "uuid": "161aeae6-ac81-4ef9-9581-f7ef0796c0d2", 00:10:39.963 "is_configured": true, 00:10:39.963 "data_offset": 2048, 00:10:39.963 "data_size": 63488 00:10:39.963 }, 00:10:39.963 { 00:10:39.963 "name": "BaseBdev4", 00:10:39.963 "uuid": "1da3f1ad-9941-434c-ba3f-e621ab1dcc3f", 00:10:39.963 "is_configured": true, 00:10:39.963 "data_offset": 2048, 00:10:39.963 "data_size": 63488 00:10:39.963 } 00:10:39.963 ] 00:10:39.963 }' 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.963 10:55:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.223 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.223 10:55:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.223 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:40.223 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.223 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.223 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:40.223 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:40.223 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.223 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.223 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.223 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.223 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b97a074a-fcfc-4cfb-acd0-2ff8cff9ea39 00:10:40.223 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.223 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.482 [2024-11-15 10:55:47.177729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:40.482 [2024-11-15 10:55:47.177977] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:40.482 [2024-11-15 10:55:47.177989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:40.482 [2024-11-15 10:55:47.178242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:40.482 [2024-11-15 10:55:47.178447] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:40.482 [2024-11-15 10:55:47.178462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:40.482 [2024-11-15 10:55:47.178599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.482 NewBaseBdev 00:10:40.482 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.482 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:40.482 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:40.482 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:40.482 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:40.482 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:40.482 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.483 10:55:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.483 [ 00:10:40.483 { 00:10:40.483 "name": "NewBaseBdev", 00:10:40.483 "aliases": [ 00:10:40.483 "b97a074a-fcfc-4cfb-acd0-2ff8cff9ea39" 00:10:40.483 ], 00:10:40.483 "product_name": "Malloc disk", 00:10:40.483 "block_size": 512, 00:10:40.483 "num_blocks": 65536, 00:10:40.483 "uuid": "b97a074a-fcfc-4cfb-acd0-2ff8cff9ea39", 00:10:40.483 "assigned_rate_limits": { 00:10:40.483 "rw_ios_per_sec": 0, 00:10:40.483 "rw_mbytes_per_sec": 0, 00:10:40.483 "r_mbytes_per_sec": 0, 00:10:40.483 "w_mbytes_per_sec": 0 00:10:40.483 }, 00:10:40.483 "claimed": true, 00:10:40.483 "claim_type": "exclusive_write", 00:10:40.483 "zoned": false, 00:10:40.483 "supported_io_types": { 00:10:40.483 "read": true, 00:10:40.483 "write": true, 00:10:40.483 "unmap": true, 00:10:40.483 "flush": true, 00:10:40.483 "reset": true, 00:10:40.483 "nvme_admin": false, 00:10:40.483 "nvme_io": false, 00:10:40.483 "nvme_io_md": false, 00:10:40.483 "write_zeroes": true, 00:10:40.483 "zcopy": true, 00:10:40.483 "get_zone_info": false, 00:10:40.483 "zone_management": false, 00:10:40.483 "zone_append": false, 00:10:40.483 "compare": false, 00:10:40.483 "compare_and_write": false, 00:10:40.483 "abort": true, 00:10:40.483 "seek_hole": false, 00:10:40.483 "seek_data": false, 00:10:40.483 "copy": true, 00:10:40.483 "nvme_iov_md": false 00:10:40.483 }, 00:10:40.483 "memory_domains": [ 00:10:40.483 { 00:10:40.483 "dma_device_id": "system", 00:10:40.483 "dma_device_type": 1 00:10:40.483 }, 00:10:40.483 { 00:10:40.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.483 "dma_device_type": 2 00:10:40.483 } 00:10:40.483 ], 00:10:40.483 "driver_specific": {} 00:10:40.483 } 00:10:40.483 ] 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:40.483 10:55:47 
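The `bdev_get_bdevs -b NewBaseBdev` output dumped above shows the malloc bdev already claimed by the raid module (`"claimed": true`, `"claim_type": "exclusive_write"`). A hedged sketch of the kind of sanity checks a caller could run on that JSON (the field names come from the log; the checking code itself is illustrative, not SPDK's):

```python
import json

# Abridged copy of the NewBaseBdev entry from the log above.
new_base_bdev = json.loads("""
{
  "name": "NewBaseBdev",
  "block_size": 512,
  "num_blocks": 65536,
  "claimed": true,
  "claim_type": "exclusive_write",
  "supported_io_types": {"read": true, "write": true, "unmap": true,
                         "flush": true, "reset": true, "nvme_admin": false}
}
""")

# A base bdev configured into a raid volume holds an exclusive-write claim.
assert new_base_bdev["claimed"]
assert new_base_bdev["claim_type"] == "exclusive_write"

# 65536 blocks of 512 bytes matches 'bdev_malloc_create 32 512' (32 MiB).
size_bytes = new_base_bdev["block_size"] * new_base_bdev["num_blocks"]
print(size_bytes)  # prints 33554432
```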
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.483 "name": "Existed_Raid", 00:10:40.483 "uuid": "d30e7257-8b74-48b4-abb0-6adec76034cc", 00:10:40.483 "strip_size_kb": 64, 00:10:40.483 
"state": "online", 00:10:40.483 "raid_level": "raid0", 00:10:40.483 "superblock": true, 00:10:40.483 "num_base_bdevs": 4, 00:10:40.483 "num_base_bdevs_discovered": 4, 00:10:40.483 "num_base_bdevs_operational": 4, 00:10:40.483 "base_bdevs_list": [ 00:10:40.483 { 00:10:40.483 "name": "NewBaseBdev", 00:10:40.483 "uuid": "b97a074a-fcfc-4cfb-acd0-2ff8cff9ea39", 00:10:40.483 "is_configured": true, 00:10:40.483 "data_offset": 2048, 00:10:40.483 "data_size": 63488 00:10:40.483 }, 00:10:40.483 { 00:10:40.483 "name": "BaseBdev2", 00:10:40.483 "uuid": "a229a4c2-26c0-4b31-8fe1-5c84a8f5efce", 00:10:40.483 "is_configured": true, 00:10:40.483 "data_offset": 2048, 00:10:40.483 "data_size": 63488 00:10:40.483 }, 00:10:40.483 { 00:10:40.483 "name": "BaseBdev3", 00:10:40.483 "uuid": "161aeae6-ac81-4ef9-9581-f7ef0796c0d2", 00:10:40.483 "is_configured": true, 00:10:40.483 "data_offset": 2048, 00:10:40.483 "data_size": 63488 00:10:40.483 }, 00:10:40.483 { 00:10:40.483 "name": "BaseBdev4", 00:10:40.483 "uuid": "1da3f1ad-9941-434c-ba3f-e621ab1dcc3f", 00:10:40.483 "is_configured": true, 00:10:40.483 "data_offset": 2048, 00:10:40.483 "data_size": 63488 00:10:40.483 } 00:10:40.483 ] 00:10:40.483 }' 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.483 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.052 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:41.052 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:41.052 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.052 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.052 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.053 
10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.053 [2024-11-15 10:55:47.697341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.053 "name": "Existed_Raid", 00:10:41.053 "aliases": [ 00:10:41.053 "d30e7257-8b74-48b4-abb0-6adec76034cc" 00:10:41.053 ], 00:10:41.053 "product_name": "Raid Volume", 00:10:41.053 "block_size": 512, 00:10:41.053 "num_blocks": 253952, 00:10:41.053 "uuid": "d30e7257-8b74-48b4-abb0-6adec76034cc", 00:10:41.053 "assigned_rate_limits": { 00:10:41.053 "rw_ios_per_sec": 0, 00:10:41.053 "rw_mbytes_per_sec": 0, 00:10:41.053 "r_mbytes_per_sec": 0, 00:10:41.053 "w_mbytes_per_sec": 0 00:10:41.053 }, 00:10:41.053 "claimed": false, 00:10:41.053 "zoned": false, 00:10:41.053 "supported_io_types": { 00:10:41.053 "read": true, 00:10:41.053 "write": true, 00:10:41.053 "unmap": true, 00:10:41.053 "flush": true, 00:10:41.053 "reset": true, 00:10:41.053 "nvme_admin": false, 00:10:41.053 "nvme_io": false, 00:10:41.053 "nvme_io_md": false, 00:10:41.053 "write_zeroes": true, 00:10:41.053 "zcopy": false, 00:10:41.053 "get_zone_info": false, 00:10:41.053 "zone_management": false, 00:10:41.053 "zone_append": false, 00:10:41.053 "compare": false, 00:10:41.053 "compare_and_write": false, 00:10:41.053 "abort": 
false, 00:10:41.053 "seek_hole": false, 00:10:41.053 "seek_data": false, 00:10:41.053 "copy": false, 00:10:41.053 "nvme_iov_md": false 00:10:41.053 }, 00:10:41.053 "memory_domains": [ 00:10:41.053 { 00:10:41.053 "dma_device_id": "system", 00:10:41.053 "dma_device_type": 1 00:10:41.053 }, 00:10:41.053 { 00:10:41.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.053 "dma_device_type": 2 00:10:41.053 }, 00:10:41.053 { 00:10:41.053 "dma_device_id": "system", 00:10:41.053 "dma_device_type": 1 00:10:41.053 }, 00:10:41.053 { 00:10:41.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.053 "dma_device_type": 2 00:10:41.053 }, 00:10:41.053 { 00:10:41.053 "dma_device_id": "system", 00:10:41.053 "dma_device_type": 1 00:10:41.053 }, 00:10:41.053 { 00:10:41.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.053 "dma_device_type": 2 00:10:41.053 }, 00:10:41.053 { 00:10:41.053 "dma_device_id": "system", 00:10:41.053 "dma_device_type": 1 00:10:41.053 }, 00:10:41.053 { 00:10:41.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.053 "dma_device_type": 2 00:10:41.053 } 00:10:41.053 ], 00:10:41.053 "driver_specific": { 00:10:41.053 "raid": { 00:10:41.053 "uuid": "d30e7257-8b74-48b4-abb0-6adec76034cc", 00:10:41.053 "strip_size_kb": 64, 00:10:41.053 "state": "online", 00:10:41.053 "raid_level": "raid0", 00:10:41.053 "superblock": true, 00:10:41.053 "num_base_bdevs": 4, 00:10:41.053 "num_base_bdevs_discovered": 4, 00:10:41.053 "num_base_bdevs_operational": 4, 00:10:41.053 "base_bdevs_list": [ 00:10:41.053 { 00:10:41.053 "name": "NewBaseBdev", 00:10:41.053 "uuid": "b97a074a-fcfc-4cfb-acd0-2ff8cff9ea39", 00:10:41.053 "is_configured": true, 00:10:41.053 "data_offset": 2048, 00:10:41.053 "data_size": 63488 00:10:41.053 }, 00:10:41.053 { 00:10:41.053 "name": "BaseBdev2", 00:10:41.053 "uuid": "a229a4c2-26c0-4b31-8fe1-5c84a8f5efce", 00:10:41.053 "is_configured": true, 00:10:41.053 "data_offset": 2048, 00:10:41.053 "data_size": 63488 00:10:41.053 }, 00:10:41.053 { 00:10:41.053 
"name": "BaseBdev3", 00:10:41.053 "uuid": "161aeae6-ac81-4ef9-9581-f7ef0796c0d2", 00:10:41.053 "is_configured": true, 00:10:41.053 "data_offset": 2048, 00:10:41.053 "data_size": 63488 00:10:41.053 }, 00:10:41.053 { 00:10:41.053 "name": "BaseBdev4", 00:10:41.053 "uuid": "1da3f1ad-9941-434c-ba3f-e621ab1dcc3f", 00:10:41.053 "is_configured": true, 00:10:41.053 "data_offset": 2048, 00:10:41.053 "data_size": 63488 00:10:41.053 } 00:10:41.053 ] 00:10:41.053 } 00:10:41.053 } 00:10:41.053 }' 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:41.053 BaseBdev2 00:10:41.053 BaseBdev3 00:10:41.053 BaseBdev4' 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.053 10:55:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.053 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.313 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.313 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.313 10:55:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.313 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.313 10:55:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.313 [2024-11-15 10:55:48.000424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.313 [2024-11-15 10:55:48.000549] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.313 [2024-11-15 10:55:48.000671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.313 [2024-11-15 10:55:48.000783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.313 [2024-11-15 10:55:48.000836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:41.313 10:55:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.313 10:55:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70197 00:10:41.313 10:55:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 70197 ']' 00:10:41.313 10:55:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 70197 00:10:41.313 10:55:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:41.313 10:55:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:41.313 10:55:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70197 00:10:41.313 10:55:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:41.313 10:55:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:41.313 10:55:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70197' 00:10:41.313 killing process with pid 70197 00:10:41.313 10:55:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 70197 00:10:41.313 [2024-11-15 10:55:48.047533] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.313 10:55:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 70197 00:10:41.882 [2024-11-15 10:55:48.499650] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.821 ************************************ 00:10:42.821 END TEST raid_state_function_test_sb 00:10:42.821 ************************************ 00:10:42.821 10:55:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:42.821 00:10:42.821 real 0m11.863s 00:10:42.821 user 0m18.753s 00:10:42.821 sys 
0m2.041s 00:10:42.821 10:55:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:42.821 10:55:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.081 10:55:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:43.081 10:55:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:43.081 10:55:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:43.081 10:55:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.081 ************************************ 00:10:43.081 START TEST raid_superblock_test 00:10:43.081 ************************************ 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
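The teardown above runs the harness's `killprocess` helper, which first probes liveness with `kill -0 $pid` before sending the real signal and waiting. A small sketch of that liveness probe in Python (illustrative only; the actual helper lives in `common/autotest_common.sh`):

```python
import os

def process_alive(pid: int) -> bool:
    # Mirrors `kill -0 $pid`: signal 0 performs existence/permission
    # checks without actually delivering a signal to the process.
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:
        return False          # no such process
    except PermissionError:
        return True           # exists, but owned by another user

print(process_alive(os.getpid()))  # prints True
```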
local strip_size_create_arg 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70869 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70869 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 70869 ']' 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:43.081 10:55:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.081 [2024-11-15 10:55:49.902684] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:10:43.081 [2024-11-15 10:55:49.902911] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70869 ] 00:10:43.341 [2024-11-15 10:55:50.076772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.341 [2024-11-15 10:55:50.193807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.601 [2024-11-15 10:55:50.404352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.601 [2024-11-15 10:55:50.404501] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:44.171 
10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.171 malloc1 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.171 [2024-11-15 10:55:50.862790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:44.171 [2024-11-15 10:55:50.862974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.171 [2024-11-15 10:55:50.863032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:44.171 [2024-11-15 10:55:50.863076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.171 [2024-11-15 10:55:50.865544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.171 [2024-11-15 10:55:50.865623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:44.171 pt1 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.171 malloc2 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.171 [2024-11-15 10:55:50.925150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:44.171 [2024-11-15 10:55:50.925219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.171 [2024-11-15 10:55:50.925241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:44.171 [2024-11-15 10:55:50.925250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.171 [2024-11-15 10:55:50.927388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.171 [2024-11-15 10:55:50.927504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:44.171 
pt2 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.171 malloc3 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.171 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.171 [2024-11-15 10:55:50.993351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:44.171 [2024-11-15 10:55:50.993488] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.172 [2024-11-15 10:55:50.993527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:44.172 [2024-11-15 10:55:50.993557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.172 [2024-11-15 10:55:50.995694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.172 [2024-11-15 10:55:50.995762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:44.172 pt3 00:10:44.172 10:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.172 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.172 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.172 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:44.172 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:44.172 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:44.172 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.172 10:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.172 malloc4 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.172 [2024-11-15 10:55:51.053305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:44.172 [2024-11-15 10:55:51.053456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.172 [2024-11-15 10:55:51.053491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:44.172 [2024-11-15 10:55:51.053518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.172 [2024-11-15 10:55:51.055516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.172 [2024-11-15 10:55:51.055595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:44.172 pt4 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.172 [2024-11-15 10:55:51.065329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:44.172 [2024-11-15 
10:55:51.067062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:44.172 [2024-11-15 10:55:51.067161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:44.172 [2024-11-15 10:55:51.067241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:44.172 [2024-11-15 10:55:51.067459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:44.172 [2024-11-15 10:55:51.067505] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:44.172 [2024-11-15 10:55:51.067760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:44.172 [2024-11-15 10:55:51.067957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:44.172 [2024-11-15 10:55:51.068002] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:44.172 [2024-11-15 10:55:51.068244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.172 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.432 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.432 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.432 "name": "raid_bdev1", 00:10:44.432 "uuid": "2bb42504-238a-4d81-a9e2-c022647ed06a", 00:10:44.432 "strip_size_kb": 64, 00:10:44.432 "state": "online", 00:10:44.432 "raid_level": "raid0", 00:10:44.432 "superblock": true, 00:10:44.432 "num_base_bdevs": 4, 00:10:44.432 "num_base_bdevs_discovered": 4, 00:10:44.432 "num_base_bdevs_operational": 4, 00:10:44.432 "base_bdevs_list": [ 00:10:44.432 { 00:10:44.432 "name": "pt1", 00:10:44.432 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.432 "is_configured": true, 00:10:44.432 "data_offset": 2048, 00:10:44.432 "data_size": 63488 00:10:44.432 }, 00:10:44.432 { 00:10:44.432 "name": "pt2", 00:10:44.432 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.432 "is_configured": true, 00:10:44.432 "data_offset": 2048, 00:10:44.432 "data_size": 63488 00:10:44.432 }, 00:10:44.432 { 00:10:44.432 "name": "pt3", 00:10:44.432 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.432 "is_configured": true, 00:10:44.432 "data_offset": 2048, 00:10:44.432 
"data_size": 63488 00:10:44.432 }, 00:10:44.432 { 00:10:44.432 "name": "pt4", 00:10:44.432 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:44.432 "is_configured": true, 00:10:44.432 "data_offset": 2048, 00:10:44.432 "data_size": 63488 00:10:44.432 } 00:10:44.432 ] 00:10:44.432 }' 00:10:44.432 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.432 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.691 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:44.691 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:44.691 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.691 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.691 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.691 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.691 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.691 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:44.691 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.691 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.691 [2024-11-15 10:55:51.544914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.691 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.692 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.692 "name": "raid_bdev1", 00:10:44.692 "aliases": [ 00:10:44.692 "2bb42504-238a-4d81-a9e2-c022647ed06a" 
00:10:44.692 ], 00:10:44.692 "product_name": "Raid Volume", 00:10:44.692 "block_size": 512, 00:10:44.692 "num_blocks": 253952, 00:10:44.692 "uuid": "2bb42504-238a-4d81-a9e2-c022647ed06a", 00:10:44.692 "assigned_rate_limits": { 00:10:44.692 "rw_ios_per_sec": 0, 00:10:44.692 "rw_mbytes_per_sec": 0, 00:10:44.692 "r_mbytes_per_sec": 0, 00:10:44.692 "w_mbytes_per_sec": 0 00:10:44.692 }, 00:10:44.692 "claimed": false, 00:10:44.692 "zoned": false, 00:10:44.692 "supported_io_types": { 00:10:44.692 "read": true, 00:10:44.692 "write": true, 00:10:44.692 "unmap": true, 00:10:44.692 "flush": true, 00:10:44.692 "reset": true, 00:10:44.692 "nvme_admin": false, 00:10:44.692 "nvme_io": false, 00:10:44.692 "nvme_io_md": false, 00:10:44.692 "write_zeroes": true, 00:10:44.692 "zcopy": false, 00:10:44.692 "get_zone_info": false, 00:10:44.692 "zone_management": false, 00:10:44.692 "zone_append": false, 00:10:44.692 "compare": false, 00:10:44.692 "compare_and_write": false, 00:10:44.692 "abort": false, 00:10:44.692 "seek_hole": false, 00:10:44.692 "seek_data": false, 00:10:44.692 "copy": false, 00:10:44.692 "nvme_iov_md": false 00:10:44.692 }, 00:10:44.692 "memory_domains": [ 00:10:44.692 { 00:10:44.692 "dma_device_id": "system", 00:10:44.692 "dma_device_type": 1 00:10:44.692 }, 00:10:44.692 { 00:10:44.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.692 "dma_device_type": 2 00:10:44.692 }, 00:10:44.692 { 00:10:44.692 "dma_device_id": "system", 00:10:44.692 "dma_device_type": 1 00:10:44.692 }, 00:10:44.692 { 00:10:44.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.692 "dma_device_type": 2 00:10:44.692 }, 00:10:44.692 { 00:10:44.692 "dma_device_id": "system", 00:10:44.692 "dma_device_type": 1 00:10:44.692 }, 00:10:44.692 { 00:10:44.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.692 "dma_device_type": 2 00:10:44.692 }, 00:10:44.692 { 00:10:44.692 "dma_device_id": "system", 00:10:44.692 "dma_device_type": 1 00:10:44.692 }, 00:10:44.692 { 00:10:44.692 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:44.692 "dma_device_type": 2 00:10:44.692 } 00:10:44.692 ], 00:10:44.692 "driver_specific": { 00:10:44.692 "raid": { 00:10:44.692 "uuid": "2bb42504-238a-4d81-a9e2-c022647ed06a", 00:10:44.692 "strip_size_kb": 64, 00:10:44.692 "state": "online", 00:10:44.692 "raid_level": "raid0", 00:10:44.692 "superblock": true, 00:10:44.692 "num_base_bdevs": 4, 00:10:44.692 "num_base_bdevs_discovered": 4, 00:10:44.692 "num_base_bdevs_operational": 4, 00:10:44.692 "base_bdevs_list": [ 00:10:44.692 { 00:10:44.692 "name": "pt1", 00:10:44.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.692 "is_configured": true, 00:10:44.692 "data_offset": 2048, 00:10:44.692 "data_size": 63488 00:10:44.692 }, 00:10:44.692 { 00:10:44.692 "name": "pt2", 00:10:44.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.692 "is_configured": true, 00:10:44.692 "data_offset": 2048, 00:10:44.692 "data_size": 63488 00:10:44.692 }, 00:10:44.692 { 00:10:44.692 "name": "pt3", 00:10:44.692 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.692 "is_configured": true, 00:10:44.692 "data_offset": 2048, 00:10:44.692 "data_size": 63488 00:10:44.692 }, 00:10:44.692 { 00:10:44.692 "name": "pt4", 00:10:44.692 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:44.692 "is_configured": true, 00:10:44.692 "data_offset": 2048, 00:10:44.692 "data_size": 63488 00:10:44.692 } 00:10:44.692 ] 00:10:44.692 } 00:10:44.692 } 00:10:44.692 }' 00:10:44.692 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:44.951 pt2 00:10:44.951 pt3 00:10:44.951 pt4' 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.951 10:55:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.951 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.952 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.952 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.952 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:44.952 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.952 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:44.952 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:44.952 [2024-11-15 10:55:51.848386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.952 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2bb42504-238a-4d81-a9e2-c022647ed06a 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2bb42504-238a-4d81-a9e2-c022647ed06a ']' 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.212 [2024-11-15 10:55:51.899982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.212 [2024-11-15 10:55:51.900060] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.212 [2024-11-15 10:55:51.900180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.212 [2024-11-15 10:55:51.900283] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.212 [2024-11-15 10:55:51.900352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.212 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.213 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:45.213 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.213 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.213 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.213 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.213 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:45.213 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.213 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.213 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:45.213 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.213 10:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:45.213 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.213 10:55:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.213 [2024-11-15 10:55:52.055710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:45.213 [2024-11-15 10:55:52.057788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:45.213 [2024-11-15 10:55:52.057833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:45.213 [2024-11-15 10:55:52.057867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:45.213 [2024-11-15 10:55:52.057917] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:45.213 [2024-11-15 10:55:52.057967] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:45.213 [2024-11-15 10:55:52.057986] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:45.213 [2024-11-15 10:55:52.058004] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:45.213 [2024-11-15 10:55:52.058017] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.213 [2024-11-15 10:55:52.058030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:10:45.213 request: 00:10:45.213 { 00:10:45.213 "name": "raid_bdev1", 00:10:45.213 "raid_level": "raid0", 00:10:45.213 "base_bdevs": [ 00:10:45.213 "malloc1", 00:10:45.213 "malloc2", 00:10:45.213 "malloc3", 00:10:45.213 "malloc4" 00:10:45.213 ], 00:10:45.213 "strip_size_kb": 64, 00:10:45.213 "superblock": false, 00:10:45.213 "method": "bdev_raid_create", 00:10:45.213 "req_id": 1 00:10:45.213 } 00:10:45.213 Got JSON-RPC error response 00:10:45.213 response: 00:10:45.213 { 00:10:45.213 "code": -17, 00:10:45.213 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:45.213 } 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.213 [2024-11-15 10:55:52.119599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:45.213 [2024-11-15 10:55:52.119747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.213 [2024-11-15 10:55:52.119787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:45.213 [2024-11-15 10:55:52.119833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.213 [2024-11-15 10:55:52.122216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.213 [2024-11-15 10:55:52.122311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:45.213 [2024-11-15 10:55:52.122426] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:45.213 [2024-11-15 10:55:52.122520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:45.213 pt1 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.213 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.474 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.474 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.474 "name": "raid_bdev1", 00:10:45.474 "uuid": "2bb42504-238a-4d81-a9e2-c022647ed06a", 00:10:45.474 "strip_size_kb": 64, 00:10:45.474 "state": "configuring", 00:10:45.474 "raid_level": "raid0", 00:10:45.474 "superblock": true, 00:10:45.474 "num_base_bdevs": 4, 00:10:45.474 "num_base_bdevs_discovered": 1, 00:10:45.474 "num_base_bdevs_operational": 4, 00:10:45.474 "base_bdevs_list": [ 00:10:45.474 { 00:10:45.474 "name": "pt1", 00:10:45.474 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.474 "is_configured": true, 00:10:45.474 "data_offset": 2048, 00:10:45.474 "data_size": 63488 00:10:45.474 }, 00:10:45.474 { 00:10:45.474 "name": null, 00:10:45.474 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.474 "is_configured": false, 00:10:45.474 "data_offset": 2048, 00:10:45.474 "data_size": 63488 00:10:45.474 }, 00:10:45.474 { 00:10:45.474 "name": null, 00:10:45.474 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:45.474 "is_configured": false, 00:10:45.474 "data_offset": 2048, 00:10:45.474 "data_size": 63488 00:10:45.474 }, 00:10:45.474 { 00:10:45.474 "name": null, 00:10:45.474 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:45.474 "is_configured": false, 00:10:45.474 "data_offset": 2048, 00:10:45.474 "data_size": 63488 00:10:45.474 } 00:10:45.474 ] 00:10:45.474 }' 00:10:45.474 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.474 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.734 [2024-11-15 10:55:52.570818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:45.734 [2024-11-15 10:55:52.570910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.734 [2024-11-15 10:55:52.570931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:45.734 [2024-11-15 10:55:52.570942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.734 [2024-11-15 10:55:52.571447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.734 [2024-11-15 10:55:52.571471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:45.734 [2024-11-15 10:55:52.571559] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:45.734 [2024-11-15 10:55:52.571591] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:45.734 pt2 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.734 [2024-11-15 10:55:52.582813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.734 10:55:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.734 "name": "raid_bdev1", 00:10:45.734 "uuid": "2bb42504-238a-4d81-a9e2-c022647ed06a", 00:10:45.734 "strip_size_kb": 64, 00:10:45.734 "state": "configuring", 00:10:45.734 "raid_level": "raid0", 00:10:45.734 "superblock": true, 00:10:45.734 "num_base_bdevs": 4, 00:10:45.734 "num_base_bdevs_discovered": 1, 00:10:45.734 "num_base_bdevs_operational": 4, 00:10:45.734 "base_bdevs_list": [ 00:10:45.734 { 00:10:45.734 "name": "pt1", 00:10:45.734 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.734 "is_configured": true, 00:10:45.734 "data_offset": 2048, 00:10:45.734 "data_size": 63488 00:10:45.734 }, 00:10:45.734 { 00:10:45.734 "name": null, 00:10:45.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.734 "is_configured": false, 00:10:45.734 "data_offset": 0, 00:10:45.734 "data_size": 63488 00:10:45.734 }, 00:10:45.734 { 00:10:45.734 "name": null, 00:10:45.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.734 "is_configured": false, 00:10:45.734 "data_offset": 2048, 00:10:45.734 "data_size": 63488 00:10:45.734 }, 00:10:45.734 { 00:10:45.734 "name": null, 00:10:45.734 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:45.734 "is_configured": false, 00:10:45.734 "data_offset": 2048, 00:10:45.734 "data_size": 63488 00:10:45.734 } 00:10:45.734 ] 00:10:45.734 }' 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.734 10:55:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:46.310 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:46.310 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.310 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:46.310 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.310 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.311 [2024-11-15 10:55:53.038053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:46.311 [2024-11-15 10:55:53.038188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.311 [2024-11-15 10:55:53.038215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:46.311 [2024-11-15 10:55:53.038225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.311 [2024-11-15 10:55:53.038733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.311 [2024-11-15 10:55:53.038752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:46.311 [2024-11-15 10:55:53.038844] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:46.311 [2024-11-15 10:55:53.038867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:46.311 pt2 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.311 [2024-11-15 10:55:53.050023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:46.311 [2024-11-15 10:55:53.050091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.311 [2024-11-15 10:55:53.050119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:46.311 [2024-11-15 10:55:53.050131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.311 [2024-11-15 10:55:53.050632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.311 [2024-11-15 10:55:53.050656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:46.311 [2024-11-15 10:55:53.050747] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:46.311 [2024-11-15 10:55:53.050770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:46.311 pt3 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.311 [2024-11-15 10:55:53.061954] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:46.311 [2024-11-15 10:55:53.062008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.311 [2024-11-15 10:55:53.062027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:46.311 [2024-11-15 10:55:53.062035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.311 [2024-11-15 10:55:53.062439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.311 [2024-11-15 10:55:53.062460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:46.311 [2024-11-15 10:55:53.062528] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:46.311 [2024-11-15 10:55:53.062547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:46.311 [2024-11-15 10:55:53.062712] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:46.311 [2024-11-15 10:55:53.062721] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:46.311 [2024-11-15 10:55:53.062967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:46.311 [2024-11-15 10:55:53.063130] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:46.311 [2024-11-15 10:55:53.063144] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:46.311 [2024-11-15 10:55:53.063280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.311 pt4 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.311 "name": "raid_bdev1", 00:10:46.311 "uuid": "2bb42504-238a-4d81-a9e2-c022647ed06a", 00:10:46.311 "strip_size_kb": 64, 00:10:46.311 "state": "online", 00:10:46.311 "raid_level": "raid0", 00:10:46.311 
"superblock": true, 00:10:46.311 "num_base_bdevs": 4, 00:10:46.311 "num_base_bdevs_discovered": 4, 00:10:46.311 "num_base_bdevs_operational": 4, 00:10:46.311 "base_bdevs_list": [ 00:10:46.311 { 00:10:46.311 "name": "pt1", 00:10:46.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.311 "is_configured": true, 00:10:46.311 "data_offset": 2048, 00:10:46.311 "data_size": 63488 00:10:46.311 }, 00:10:46.311 { 00:10:46.311 "name": "pt2", 00:10:46.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.311 "is_configured": true, 00:10:46.311 "data_offset": 2048, 00:10:46.311 "data_size": 63488 00:10:46.311 }, 00:10:46.311 { 00:10:46.311 "name": "pt3", 00:10:46.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.311 "is_configured": true, 00:10:46.311 "data_offset": 2048, 00:10:46.311 "data_size": 63488 00:10:46.311 }, 00:10:46.311 { 00:10:46.311 "name": "pt4", 00:10:46.311 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:46.311 "is_configured": true, 00:10:46.311 "data_offset": 2048, 00:10:46.311 "data_size": 63488 00:10:46.311 } 00:10:46.311 ] 00:10:46.311 }' 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.311 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.878 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:46.878 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:46.878 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.878 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.878 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.878 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.878 10:55:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:46.878 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.878 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.878 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.878 [2024-11-15 10:55:53.545577] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.878 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.878 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.878 "name": "raid_bdev1", 00:10:46.878 "aliases": [ 00:10:46.878 "2bb42504-238a-4d81-a9e2-c022647ed06a" 00:10:46.878 ], 00:10:46.878 "product_name": "Raid Volume", 00:10:46.878 "block_size": 512, 00:10:46.878 "num_blocks": 253952, 00:10:46.878 "uuid": "2bb42504-238a-4d81-a9e2-c022647ed06a", 00:10:46.878 "assigned_rate_limits": { 00:10:46.878 "rw_ios_per_sec": 0, 00:10:46.878 "rw_mbytes_per_sec": 0, 00:10:46.878 "r_mbytes_per_sec": 0, 00:10:46.878 "w_mbytes_per_sec": 0 00:10:46.878 }, 00:10:46.878 "claimed": false, 00:10:46.878 "zoned": false, 00:10:46.878 "supported_io_types": { 00:10:46.878 "read": true, 00:10:46.878 "write": true, 00:10:46.878 "unmap": true, 00:10:46.878 "flush": true, 00:10:46.878 "reset": true, 00:10:46.878 "nvme_admin": false, 00:10:46.878 "nvme_io": false, 00:10:46.878 "nvme_io_md": false, 00:10:46.878 "write_zeroes": true, 00:10:46.878 "zcopy": false, 00:10:46.878 "get_zone_info": false, 00:10:46.878 "zone_management": false, 00:10:46.878 "zone_append": false, 00:10:46.878 "compare": false, 00:10:46.878 "compare_and_write": false, 00:10:46.878 "abort": false, 00:10:46.878 "seek_hole": false, 00:10:46.878 "seek_data": false, 00:10:46.878 "copy": false, 00:10:46.878 "nvme_iov_md": false 00:10:46.878 }, 00:10:46.878 
"memory_domains": [ 00:10:46.878 { 00:10:46.878 "dma_device_id": "system", 00:10:46.878 "dma_device_type": 1 00:10:46.878 }, 00:10:46.878 { 00:10:46.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.878 "dma_device_type": 2 00:10:46.878 }, 00:10:46.878 { 00:10:46.878 "dma_device_id": "system", 00:10:46.878 "dma_device_type": 1 00:10:46.878 }, 00:10:46.878 { 00:10:46.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.878 "dma_device_type": 2 00:10:46.878 }, 00:10:46.878 { 00:10:46.878 "dma_device_id": "system", 00:10:46.878 "dma_device_type": 1 00:10:46.878 }, 00:10:46.878 { 00:10:46.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.878 "dma_device_type": 2 00:10:46.878 }, 00:10:46.878 { 00:10:46.878 "dma_device_id": "system", 00:10:46.878 "dma_device_type": 1 00:10:46.878 }, 00:10:46.878 { 00:10:46.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.878 "dma_device_type": 2 00:10:46.878 } 00:10:46.878 ], 00:10:46.878 "driver_specific": { 00:10:46.878 "raid": { 00:10:46.878 "uuid": "2bb42504-238a-4d81-a9e2-c022647ed06a", 00:10:46.878 "strip_size_kb": 64, 00:10:46.878 "state": "online", 00:10:46.878 "raid_level": "raid0", 00:10:46.878 "superblock": true, 00:10:46.878 "num_base_bdevs": 4, 00:10:46.878 "num_base_bdevs_discovered": 4, 00:10:46.878 "num_base_bdevs_operational": 4, 00:10:46.878 "base_bdevs_list": [ 00:10:46.878 { 00:10:46.878 "name": "pt1", 00:10:46.878 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.878 "is_configured": true, 00:10:46.878 "data_offset": 2048, 00:10:46.878 "data_size": 63488 00:10:46.878 }, 00:10:46.878 { 00:10:46.878 "name": "pt2", 00:10:46.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.878 "is_configured": true, 00:10:46.878 "data_offset": 2048, 00:10:46.878 "data_size": 63488 00:10:46.878 }, 00:10:46.878 { 00:10:46.878 "name": "pt3", 00:10:46.878 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.878 "is_configured": true, 00:10:46.879 "data_offset": 2048, 00:10:46.879 "data_size": 63488 
00:10:46.879 }, 00:10:46.879 { 00:10:46.879 "name": "pt4", 00:10:46.879 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:46.879 "is_configured": true, 00:10:46.879 "data_offset": 2048, 00:10:46.879 "data_size": 63488 00:10:46.879 } 00:10:46.879 ] 00:10:46.879 } 00:10:46.879 } 00:10:46.879 }' 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:46.879 pt2 00:10:46.879 pt3 00:10:46.879 pt4' 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.879 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.137 [2024-11-15 10:55:53.892934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2bb42504-238a-4d81-a9e2-c022647ed06a '!=' 2bb42504-238a-4d81-a9e2-c022647ed06a ']' 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70869 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 70869 ']' 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 70869 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70869 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70869' 00:10:47.137 killing process with pid 70869 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 70869 00:10:47.137 [2024-11-15 10:55:53.977190] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:47.137 [2024-11-15 10:55:53.977287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.137 10:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 70869 00:10:47.137 [2024-11-15 10:55:53.977393] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.138 [2024-11-15 10:55:53.977404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:47.705 [2024-11-15 10:55:54.396338] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:48.646 10:55:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:48.646 00:10:48.646 real 0m5.737s 00:10:48.646 user 0m8.238s 00:10:48.646 sys 0m0.897s 00:10:48.646 10:55:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:48.646 10:55:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.646 ************************************ 00:10:48.646 END TEST raid_superblock_test 
00:10:48.646 ************************************ 00:10:48.906 10:55:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:48.906 10:55:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:48.906 10:55:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:48.906 10:55:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:48.906 ************************************ 00:10:48.906 START TEST raid_read_error_test 00:10:48.906 ************************************ 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8FPgmn7aNx 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71135 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71135 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 71135 ']' 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.906 10:55:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:48.907 10:55:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.907 [2024-11-15 10:55:55.723997] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:10:48.907 [2024-11-15 10:55:55.724197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71135 ] 00:10:49.165 [2024-11-15 10:55:55.899496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.165 [2024-11-15 10:55:56.015412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.424 [2024-11-15 10:55:56.227445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.424 [2024-11-15 10:55:56.227489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.683 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:49.683 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:49.683 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:49.683 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:49.683 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.683 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.943 BaseBdev1_malloc 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.943 true 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.943 [2024-11-15 10:55:56.663236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:49.943 [2024-11-15 10:55:56.663295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.943 [2024-11-15 10:55:56.663330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:49.943 [2024-11-15 10:55:56.663341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.943 [2024-11-15 10:55:56.665669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.943 [2024-11-15 10:55:56.665707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:49.943 BaseBdev1 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.943 BaseBdev2_malloc 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.943 true 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.943 [2024-11-15 10:55:56.731455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:49.943 [2024-11-15 10:55:56.731566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.943 [2024-11-15 10:55:56.731590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:49.943 [2024-11-15 10:55:56.731604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.943 [2024-11-15 10:55:56.734192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.943 [2024-11-15 10:55:56.734239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:49.943 BaseBdev2 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.943 BaseBdev3_malloc 00:10:49.943 10:55:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.943 true 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.943 [2024-11-15 10:55:56.814180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:49.943 [2024-11-15 10:55:56.814235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.943 [2024-11-15 10:55:56.814254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:49.943 [2024-11-15 10:55:56.814264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.943 [2024-11-15 10:55:56.816441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.943 [2024-11-15 10:55:56.816525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:49.943 BaseBdev3 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.943 BaseBdev4_malloc 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.943 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.201 true 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.201 [2024-11-15 10:55:56.882042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:50.201 [2024-11-15 10:55:56.882102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.201 [2024-11-15 10:55:56.882140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:50.201 [2024-11-15 10:55:56.882151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.201 [2024-11-15 10:55:56.884409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.201 [2024-11-15 10:55:56.884451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:50.201 BaseBdev4 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.201 [2024-11-15 10:55:56.894081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.201 [2024-11-15 10:55:56.895934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.201 [2024-11-15 10:55:56.896072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:50.201 [2024-11-15 10:55:56.896152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:50.201 [2024-11-15 10:55:56.896411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:50.201 [2024-11-15 10:55:56.896429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:50.201 [2024-11-15 10:55:56.896704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:50.201 [2024-11-15 10:55:56.896875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:50.201 [2024-11-15 10:55:56.896887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:50.201 [2024-11-15 10:55:56.897084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:50.201 10:55:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.201 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.202 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.202 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.202 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.202 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.202 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.202 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.202 "name": "raid_bdev1", 00:10:50.202 "uuid": "58525477-a228-4f93-97e5-fffca0310650", 00:10:50.202 "strip_size_kb": 64, 00:10:50.202 "state": "online", 00:10:50.202 "raid_level": "raid0", 00:10:50.202 "superblock": true, 00:10:50.202 "num_base_bdevs": 4, 00:10:50.202 "num_base_bdevs_discovered": 4, 00:10:50.202 "num_base_bdevs_operational": 4, 00:10:50.202 "base_bdevs_list": [ 00:10:50.202 
{ 00:10:50.202 "name": "BaseBdev1", 00:10:50.202 "uuid": "fbe59d74-531f-56ef-964e-9f5e35d5a142", 00:10:50.202 "is_configured": true, 00:10:50.202 "data_offset": 2048, 00:10:50.202 "data_size": 63488 00:10:50.202 }, 00:10:50.202 { 00:10:50.202 "name": "BaseBdev2", 00:10:50.202 "uuid": "4788680b-9787-587b-bee2-756919a07698", 00:10:50.202 "is_configured": true, 00:10:50.202 "data_offset": 2048, 00:10:50.202 "data_size": 63488 00:10:50.202 }, 00:10:50.202 { 00:10:50.202 "name": "BaseBdev3", 00:10:50.202 "uuid": "22bd54d4-3345-5ca6-bc24-26f6f526c09d", 00:10:50.202 "is_configured": true, 00:10:50.202 "data_offset": 2048, 00:10:50.202 "data_size": 63488 00:10:50.202 }, 00:10:50.202 { 00:10:50.202 "name": "BaseBdev4", 00:10:50.202 "uuid": "b8755621-0a6c-57d3-a35a-4652235e3acc", 00:10:50.202 "is_configured": true, 00:10:50.202 "data_offset": 2048, 00:10:50.202 "data_size": 63488 00:10:50.202 } 00:10:50.202 ] 00:10:50.202 }' 00:10:50.202 10:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.202 10:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.460 10:55:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:50.460 10:55:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:50.719 [2024-11-15 10:55:57.486536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.657 10:55:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.657 10:55:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.657 "name": "raid_bdev1", 00:10:51.657 "uuid": "58525477-a228-4f93-97e5-fffca0310650", 00:10:51.657 "strip_size_kb": 64, 00:10:51.657 "state": "online", 00:10:51.657 "raid_level": "raid0", 00:10:51.657 "superblock": true, 00:10:51.657 "num_base_bdevs": 4, 00:10:51.657 "num_base_bdevs_discovered": 4, 00:10:51.657 "num_base_bdevs_operational": 4, 00:10:51.657 "base_bdevs_list": [ 00:10:51.657 { 00:10:51.657 "name": "BaseBdev1", 00:10:51.657 "uuid": "fbe59d74-531f-56ef-964e-9f5e35d5a142", 00:10:51.657 "is_configured": true, 00:10:51.657 "data_offset": 2048, 00:10:51.657 "data_size": 63488 00:10:51.657 }, 00:10:51.657 { 00:10:51.657 "name": "BaseBdev2", 00:10:51.657 "uuid": "4788680b-9787-587b-bee2-756919a07698", 00:10:51.657 "is_configured": true, 00:10:51.657 "data_offset": 2048, 00:10:51.657 "data_size": 63488 00:10:51.657 }, 00:10:51.657 { 00:10:51.657 "name": "BaseBdev3", 00:10:51.657 "uuid": "22bd54d4-3345-5ca6-bc24-26f6f526c09d", 00:10:51.657 "is_configured": true, 00:10:51.657 "data_offset": 2048, 00:10:51.657 "data_size": 63488 00:10:51.657 }, 00:10:51.657 { 00:10:51.657 "name": "BaseBdev4", 00:10:51.657 "uuid": "b8755621-0a6c-57d3-a35a-4652235e3acc", 00:10:51.657 "is_configured": true, 00:10:51.657 "data_offset": 2048, 00:10:51.657 "data_size": 63488 00:10:51.657 } 00:10:51.657 ] 00:10:51.657 }' 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.657 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.226 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:52.226 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.226 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.226 [2024-11-15 10:55:58.863236] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.226 [2024-11-15 10:55:58.863272] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.226 [2024-11-15 10:55:58.866235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.226 [2024-11-15 10:55:58.866296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.226 [2024-11-15 10:55:58.866357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.226 [2024-11-15 10:55:58.866370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:52.226 { 00:10:52.226 "results": [ 00:10:52.226 { 00:10:52.226 "job": "raid_bdev1", 00:10:52.226 "core_mask": "0x1", 00:10:52.226 "workload": "randrw", 00:10:52.226 "percentage": 50, 00:10:52.226 "status": "finished", 00:10:52.226 "queue_depth": 1, 00:10:52.226 "io_size": 131072, 00:10:52.226 "runtime": 1.377291, 00:10:52.226 "iops": 14497.29940876692, 00:10:52.226 "mibps": 1812.162426095865, 00:10:52.226 "io_failed": 1, 00:10:52.226 "io_timeout": 0, 00:10:52.226 "avg_latency_us": 95.79381648191692, 00:10:52.226 "min_latency_us": 27.94759825327511, 00:10:52.226 "max_latency_us": 1645.5545851528384 00:10:52.226 } 00:10:52.226 ], 00:10:52.226 "core_count": 1 00:10:52.226 } 00:10:52.226 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.226 10:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71135 00:10:52.226 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 71135 ']' 00:10:52.226 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 71135 00:10:52.226 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:10:52.226 10:55:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:52.226 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71135 00:10:52.226 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:52.226 killing process with pid 71135 00:10:52.226 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:52.226 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71135' 00:10:52.226 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 71135 00:10:52.226 [2024-11-15 10:55:58.910692] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:52.226 10:55:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 71135 00:10:52.485 [2024-11-15 10:55:59.273908] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.868 10:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8FPgmn7aNx 00:10:53.868 10:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:53.868 10:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:53.868 10:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:53.868 10:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:53.868 10:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:53.868 10:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:53.868 10:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:53.868 00:10:53.868 real 0m4.931s 00:10:53.868 user 0m5.889s 00:10:53.868 sys 0m0.585s 00:10:53.868 10:56:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:10:53.868 ************************************ 00:10:53.868 END TEST raid_read_error_test 00:10:53.868 ************************************ 00:10:53.868 10:56:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.868 10:56:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:53.868 10:56:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:53.868 10:56:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:53.868 10:56:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.868 ************************************ 00:10:53.868 START TEST raid_write_error_test 00:10:53.868 ************************************ 00:10:53.868 10:56:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:10:53.868 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:53.868 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:53.868 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:53.868 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:53.868 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.868 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:53.868 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.868 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.868 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:53.868 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qAcHvCdQ9E 00:10:53.869 10:56:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71286 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71286 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 71286 ']' 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:53.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:53.869 10:56:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.869 [2024-11-15 10:56:00.718612] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:10:53.869 [2024-11-15 10:56:00.718830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71286 ] 00:10:54.129 [2024-11-15 10:56:00.892811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.129 [2024-11-15 10:56:01.017794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.389 [2024-11-15 10:56:01.243251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.389 [2024-11-15 10:56:01.243290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.959 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.960 BaseBdev1_malloc 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.960 true 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.960 [2024-11-15 10:56:01.683526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:54.960 [2024-11-15 10:56:01.683650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.960 [2024-11-15 10:56:01.683675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:54.960 [2024-11-15 10:56:01.683698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.960 [2024-11-15 10:56:01.685927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.960 [2024-11-15 10:56:01.685972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:54.960 BaseBdev1 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.960 BaseBdev2_malloc 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:54.960 10:56:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.960 true 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.960 [2024-11-15 10:56:01.753300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:54.960 [2024-11-15 10:56:01.753371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.960 [2024-11-15 10:56:01.753390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:54.960 [2024-11-15 10:56:01.753401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.960 [2024-11-15 10:56:01.755806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.960 [2024-11-15 10:56:01.755903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:54.960 BaseBdev2 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:54.960 BaseBdev3_malloc 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.960 true 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.960 [2024-11-15 10:56:01.835324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:54.960 [2024-11-15 10:56:01.835389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.960 [2024-11-15 10:56:01.835411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:54.960 [2024-11-15 10:56:01.835423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.960 [2024-11-15 10:56:01.837599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.960 [2024-11-15 10:56:01.837708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:54.960 BaseBdev3 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.960 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.220 BaseBdev4_malloc 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.220 true 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.220 [2024-11-15 10:56:01.905769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:55.220 [2024-11-15 10:56:01.905830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.220 [2024-11-15 10:56:01.905852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:55.220 [2024-11-15 10:56:01.905865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.220 [2024-11-15 10:56:01.908227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.220 [2024-11-15 10:56:01.908280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:55.220 BaseBdev4 
00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.220 [2024-11-15 10:56:01.917806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.220 [2024-11-15 10:56:01.919654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.220 [2024-11-15 10:56:01.919728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.220 [2024-11-15 10:56:01.919794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.220 [2024-11-15 10:56:01.920060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:55.220 [2024-11-15 10:56:01.920080] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:55.220 [2024-11-15 10:56:01.920357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:55.220 [2024-11-15 10:56:01.920521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:55.220 [2024-11-15 10:56:01.920533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:55.220 [2024-11-15 10:56:01.920723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.220 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.220 "name": "raid_bdev1", 00:10:55.220 "uuid": "295d98db-6af2-43ba-a4d4-6ed8d00c9e8a", 00:10:55.220 "strip_size_kb": 64, 00:10:55.220 "state": "online", 00:10:55.220 "raid_level": "raid0", 00:10:55.220 "superblock": true, 00:10:55.220 "num_base_bdevs": 4, 00:10:55.220 "num_base_bdevs_discovered": 4, 00:10:55.220 
"num_base_bdevs_operational": 4, 00:10:55.220 "base_bdevs_list": [ 00:10:55.220 { 00:10:55.220 "name": "BaseBdev1", 00:10:55.220 "uuid": "894ebcf3-7b78-5a8c-a358-efc2451a4a2b", 00:10:55.220 "is_configured": true, 00:10:55.220 "data_offset": 2048, 00:10:55.220 "data_size": 63488 00:10:55.220 }, 00:10:55.220 { 00:10:55.220 "name": "BaseBdev2", 00:10:55.220 "uuid": "cd4c83c1-45bb-5d8b-9d2e-e5dfa337b363", 00:10:55.220 "is_configured": true, 00:10:55.220 "data_offset": 2048, 00:10:55.220 "data_size": 63488 00:10:55.220 }, 00:10:55.220 { 00:10:55.220 "name": "BaseBdev3", 00:10:55.221 "uuid": "072f7a4d-c25b-5ebc-b76e-71adc43b7c00", 00:10:55.221 "is_configured": true, 00:10:55.221 "data_offset": 2048, 00:10:55.221 "data_size": 63488 00:10:55.221 }, 00:10:55.221 { 00:10:55.221 "name": "BaseBdev4", 00:10:55.221 "uuid": "d6b8b6f1-7ffd-590c-b606-1304621a1c5c", 00:10:55.221 "is_configured": true, 00:10:55.221 "data_offset": 2048, 00:10:55.221 "data_size": 63488 00:10:55.221 } 00:10:55.221 ] 00:10:55.221 }' 00:10:55.221 10:56:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.221 10:56:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.481 10:56:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:55.481 10:56:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:55.764 [2024-11-15 10:56:02.486240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.704 "name": "raid_bdev1", 00:10:56.704 "uuid": "295d98db-6af2-43ba-a4d4-6ed8d00c9e8a", 00:10:56.704 "strip_size_kb": 64, 00:10:56.704 "state": "online", 00:10:56.704 "raid_level": "raid0", 00:10:56.704 "superblock": true, 00:10:56.704 "num_base_bdevs": 4, 00:10:56.704 "num_base_bdevs_discovered": 4, 00:10:56.704 "num_base_bdevs_operational": 4, 00:10:56.704 "base_bdevs_list": [ 00:10:56.704 { 00:10:56.704 "name": "BaseBdev1", 00:10:56.704 "uuid": "894ebcf3-7b78-5a8c-a358-efc2451a4a2b", 00:10:56.704 "is_configured": true, 00:10:56.704 "data_offset": 2048, 00:10:56.704 "data_size": 63488 00:10:56.704 }, 00:10:56.704 { 00:10:56.704 "name": "BaseBdev2", 00:10:56.704 "uuid": "cd4c83c1-45bb-5d8b-9d2e-e5dfa337b363", 00:10:56.704 "is_configured": true, 00:10:56.704 "data_offset": 2048, 00:10:56.704 "data_size": 63488 00:10:56.704 }, 00:10:56.704 { 00:10:56.704 "name": "BaseBdev3", 00:10:56.704 "uuid": "072f7a4d-c25b-5ebc-b76e-71adc43b7c00", 00:10:56.704 "is_configured": true, 00:10:56.704 "data_offset": 2048, 00:10:56.704 "data_size": 63488 00:10:56.704 }, 00:10:56.704 { 00:10:56.704 "name": "BaseBdev4", 00:10:56.704 "uuid": "d6b8b6f1-7ffd-590c-b606-1304621a1c5c", 00:10:56.704 "is_configured": true, 00:10:56.704 "data_offset": 2048, 00:10:56.704 "data_size": 63488 00:10:56.704 } 00:10:56.704 ] 00:10:56.704 }' 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.704 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.964 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:56.964 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.964 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:56.964 [2024-11-15 10:56:03.883477] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:56.964 [2024-11-15 10:56:03.883522] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.964 [2024-11-15 10:56:03.886693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.964 [2024-11-15 10:56:03.886773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.964 [2024-11-15 10:56:03.886825] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.964 [2024-11-15 10:56:03.886838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:57.233 { 00:10:57.233 "results": [ 00:10:57.233 { 00:10:57.233 "job": "raid_bdev1", 00:10:57.233 "core_mask": "0x1", 00:10:57.233 "workload": "randrw", 00:10:57.233 "percentage": 50, 00:10:57.233 "status": "finished", 00:10:57.233 "queue_depth": 1, 00:10:57.233 "io_size": 131072, 00:10:57.233 "runtime": 1.397505, 00:10:57.233 "iops": 14443.597697324876, 00:10:57.233 "mibps": 1805.4497121656095, 00:10:57.233 "io_failed": 1, 00:10:57.233 "io_timeout": 0, 00:10:57.233 "avg_latency_us": 96.15787222498882, 00:10:57.233 "min_latency_us": 28.05938864628821, 00:10:57.233 "max_latency_us": 1645.5545851528384 00:10:57.233 } 00:10:57.233 ], 00:10:57.233 "core_count": 1 00:10:57.233 } 00:10:57.233 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.233 10:56:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71286 00:10:57.233 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 71286 ']' 00:10:57.233 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 71286 00:10:57.233 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 
00:10:57.233 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:57.233 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71286 00:10:57.233 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:57.233 killing process with pid 71286 00:10:57.233 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:57.233 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71286' 00:10:57.233 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 71286 00:10:57.233 [2024-11-15 10:56:03.926330] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.233 10:56:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 71286 00:10:57.512 [2024-11-15 10:56:04.265193] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.890 10:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:58.890 10:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qAcHvCdQ9E 00:10:58.890 10:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:58.890 10:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:58.890 10:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:58.890 10:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:58.890 10:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:58.890 10:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:58.890 00:10:58.890 real 0m4.873s 00:10:58.890 user 0m5.819s 00:10:58.890 sys 0m0.585s 00:10:58.890 10:56:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:58.891 ************************************ 00:10:58.891 END TEST raid_write_error_test 00:10:58.891 ************************************ 00:10:58.891 10:56:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.891 10:56:05 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:58.891 10:56:05 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:58.891 10:56:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:58.891 10:56:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:58.891 10:56:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.891 ************************************ 00:10:58.891 START TEST raid_state_function_test 00:10:58.891 ************************************ 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71433 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71433' 00:10:58.891 Process raid pid: 71433 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71433 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71433 ']' 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:58.891 10:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.891 [2024-11-15 10:56:05.657023] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:10:58.891 [2024-11-15 10:56:05.657237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.150 [2024-11-15 10:56:05.835102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.150 [2024-11-15 10:56:05.952769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.409 [2024-11-15 10:56:06.179200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.409 [2024-11-15 10:56:06.179357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.668 10:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.669 [2024-11-15 10:56:06.533503] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:59.669 [2024-11-15 10:56:06.533560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:59.669 [2024-11-15 10:56:06.533570] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:59.669 [2024-11-15 10:56:06.533580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:59.669 [2024-11-15 10:56:06.533587] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:59.669 [2024-11-15 10:56:06.533596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:59.669 [2024-11-15 10:56:06.533602] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:59.669 [2024-11-15 10:56:06.533611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.669 "name": "Existed_Raid", 00:10:59.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.669 "strip_size_kb": 64, 00:10:59.669 "state": "configuring", 00:10:59.669 "raid_level": "concat", 00:10:59.669 "superblock": false, 00:10:59.669 "num_base_bdevs": 4, 00:10:59.669 "num_base_bdevs_discovered": 0, 00:10:59.669 "num_base_bdevs_operational": 4, 00:10:59.669 "base_bdevs_list": [ 00:10:59.669 { 00:10:59.669 "name": "BaseBdev1", 00:10:59.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.669 "is_configured": false, 00:10:59.669 "data_offset": 0, 00:10:59.669 "data_size": 0 00:10:59.669 }, 00:10:59.669 { 00:10:59.669 "name": "BaseBdev2", 00:10:59.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.669 "is_configured": false, 00:10:59.669 "data_offset": 0, 00:10:59.669 "data_size": 0 00:10:59.669 }, 00:10:59.669 { 00:10:59.669 "name": "BaseBdev3", 00:10:59.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.669 "is_configured": false, 00:10:59.669 "data_offset": 0, 00:10:59.669 "data_size": 0 00:10:59.669 }, 00:10:59.669 { 00:10:59.669 "name": "BaseBdev4", 00:10:59.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.669 "is_configured": false, 00:10:59.669 "data_offset": 0, 00:10:59.669 "data_size": 0 00:10:59.669 } 00:10:59.669 ] 00:10:59.669 }' 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.669 10:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.236 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:00.236 10:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.236 10:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.236 [2024-11-15 10:56:06.980676] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:00.236 [2024-11-15 10:56:06.980782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:00.236 10:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.236 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:00.236 10:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.236 10:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.236 [2024-11-15 10:56:06.988649] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:00.236 [2024-11-15 10:56:06.988735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:00.236 [2024-11-15 10:56:06.988783] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:00.236 [2024-11-15 10:56:06.988810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:00.236 [2024-11-15 10:56:06.988831] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:00.236 [2024-11-15 10:56:06.988855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:00.236 [2024-11-15 10:56:06.988876] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:00.236 [2024-11-15 10:56:06.988912] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:00.236 10:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.236 10:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:00.236 10:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.236 10:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.236 [2024-11-15 10:56:07.035507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.236 BaseBdev1 00:11:00.236 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.236 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:00.236 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:00.236 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:00.236 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:00.236 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:00.236 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:00.236 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:00.236 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.236 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.236 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.236 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:00.236 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.236 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.236 [ 00:11:00.236 { 00:11:00.236 "name": "BaseBdev1", 00:11:00.236 "aliases": [ 00:11:00.236 "597318d3-f279-467b-b04a-a89548a3e80c" 00:11:00.236 ], 00:11:00.236 "product_name": "Malloc disk", 00:11:00.236 "block_size": 512, 00:11:00.236 "num_blocks": 65536, 00:11:00.237 "uuid": "597318d3-f279-467b-b04a-a89548a3e80c", 00:11:00.237 "assigned_rate_limits": { 00:11:00.237 "rw_ios_per_sec": 0, 00:11:00.237 "rw_mbytes_per_sec": 0, 00:11:00.237 "r_mbytes_per_sec": 0, 00:11:00.237 "w_mbytes_per_sec": 0 00:11:00.237 }, 00:11:00.237 "claimed": true, 00:11:00.237 "claim_type": "exclusive_write", 00:11:00.237 "zoned": false, 00:11:00.237 "supported_io_types": { 00:11:00.237 "read": true, 00:11:00.237 "write": true, 00:11:00.237 "unmap": true, 00:11:00.237 "flush": true, 00:11:00.237 "reset": true, 00:11:00.237 "nvme_admin": false, 00:11:00.237 "nvme_io": false, 00:11:00.237 "nvme_io_md": false, 00:11:00.237 "write_zeroes": true, 00:11:00.237 "zcopy": true, 00:11:00.237 "get_zone_info": false, 00:11:00.237 "zone_management": false, 00:11:00.237 "zone_append": false, 00:11:00.237 "compare": false, 00:11:00.237 "compare_and_write": false, 00:11:00.237 "abort": true, 00:11:00.237 "seek_hole": false, 00:11:00.237 "seek_data": false, 00:11:00.237 "copy": true, 00:11:00.237 "nvme_iov_md": false 00:11:00.237 }, 00:11:00.237 "memory_domains": [ 00:11:00.237 { 00:11:00.237 "dma_device_id": "system", 00:11:00.237 "dma_device_type": 1 00:11:00.237 }, 00:11:00.237 { 00:11:00.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.237 "dma_device_type": 2 00:11:00.237 } 00:11:00.237 ], 00:11:00.237 "driver_specific": {} 00:11:00.237 } 00:11:00.237 ] 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.237 "name": "Existed_Raid", 
00:11:00.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.237 "strip_size_kb": 64, 00:11:00.237 "state": "configuring", 00:11:00.237 "raid_level": "concat", 00:11:00.237 "superblock": false, 00:11:00.237 "num_base_bdevs": 4, 00:11:00.237 "num_base_bdevs_discovered": 1, 00:11:00.237 "num_base_bdevs_operational": 4, 00:11:00.237 "base_bdevs_list": [ 00:11:00.237 { 00:11:00.237 "name": "BaseBdev1", 00:11:00.237 "uuid": "597318d3-f279-467b-b04a-a89548a3e80c", 00:11:00.237 "is_configured": true, 00:11:00.237 "data_offset": 0, 00:11:00.237 "data_size": 65536 00:11:00.237 }, 00:11:00.237 { 00:11:00.237 "name": "BaseBdev2", 00:11:00.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.237 "is_configured": false, 00:11:00.237 "data_offset": 0, 00:11:00.237 "data_size": 0 00:11:00.237 }, 00:11:00.237 { 00:11:00.237 "name": "BaseBdev3", 00:11:00.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.237 "is_configured": false, 00:11:00.237 "data_offset": 0, 00:11:00.237 "data_size": 0 00:11:00.237 }, 00:11:00.237 { 00:11:00.237 "name": "BaseBdev4", 00:11:00.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.237 "is_configured": false, 00:11:00.237 "data_offset": 0, 00:11:00.237 "data_size": 0 00:11:00.237 } 00:11:00.237 ] 00:11:00.237 }' 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.237 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.805 [2024-11-15 10:56:07.478792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:00.805 [2024-11-15 10:56:07.478851] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.805 [2024-11-15 10:56:07.486820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.805 [2024-11-15 10:56:07.488726] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:00.805 [2024-11-15 10:56:07.488768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:00.805 [2024-11-15 10:56:07.488779] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:00.805 [2024-11-15 10:56:07.488790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:00.805 [2024-11-15 10:56:07.488798] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:00.805 [2024-11-15 10:56:07.488808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
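The trace above repeats a simple cycle: `bdev_raid_create` is attempted while the base bdevs do not exist yet (so the raid stays unconfigured), the malloc base bdevs are then created one at a time (`bdev_malloc_create 32 512 -b BaseBdevN`), and `verify_raid_bdev_state` confirms the array remains in the `configuring` state until all four members arrive. As a rough sketch of what the `rpc_cmd` wrappers are sending over the SPDK RPC socket — assuming SPDK's standard JSON-RPC parameter names (`name`, `raid_level`, `strip_size_kb`, `base_bdevs`, `num_blocks`, `block_size`); the request ids here are illustrative, not taken from the log:

```python
import json

def rpc_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request of the shape rpc_cmd writes to the app's RPC socket."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Mirrors the logged sequence: try to create the concat raid from
# not-yet-existing base bdevs, then create a malloc base bdev for it.
requests = [
    rpc_request(1, "bdev_raid_create", {
        "name": "Existed_Raid",      # -n Existed_Raid
        "raid_level": "concat",      # -r concat
        "strip_size_kb": 64,         # -z 64
        "base_bdevs": ["BaseBdev1", "BaseBdev2", "BaseBdev3", "BaseBdev4"],
    }),
    rpc_request(2, "bdev_malloc_create", {
        "num_blocks": 65536,         # 32 MiB of 512-byte blocks, matching "32 512"
        "block_size": 512,
        "name": "BaseBdev1",         # -b BaseBdev1
    }),
]

# One request per line, as they would be written to the socket.
payload = "\n".join(json.dumps(r) for r in requests)
print(payload)
```

Against a live SPDK app these would go to the RPC socket (typically via `scripts/rpc.py`); here the payload is only assembled and printed for inspection. The first call is exactly the one that produces the `bdev_open_ext: *NOTICE*: Currently unable to find bdev` lines above, since none of the base bdevs exist when it runs.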
00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.805 "name": "Existed_Raid", 00:11:00.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.805 "strip_size_kb": 64, 00:11:00.805 "state": "configuring", 00:11:00.805 "raid_level": "concat", 00:11:00.805 "superblock": false, 00:11:00.805 "num_base_bdevs": 4, 00:11:00.805 
"num_base_bdevs_discovered": 1, 00:11:00.805 "num_base_bdevs_operational": 4, 00:11:00.805 "base_bdevs_list": [ 00:11:00.805 { 00:11:00.805 "name": "BaseBdev1", 00:11:00.805 "uuid": "597318d3-f279-467b-b04a-a89548a3e80c", 00:11:00.805 "is_configured": true, 00:11:00.805 "data_offset": 0, 00:11:00.805 "data_size": 65536 00:11:00.805 }, 00:11:00.805 { 00:11:00.805 "name": "BaseBdev2", 00:11:00.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.805 "is_configured": false, 00:11:00.805 "data_offset": 0, 00:11:00.805 "data_size": 0 00:11:00.805 }, 00:11:00.805 { 00:11:00.805 "name": "BaseBdev3", 00:11:00.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.805 "is_configured": false, 00:11:00.805 "data_offset": 0, 00:11:00.805 "data_size": 0 00:11:00.805 }, 00:11:00.805 { 00:11:00.805 "name": "BaseBdev4", 00:11:00.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.805 "is_configured": false, 00:11:00.805 "data_offset": 0, 00:11:00.805 "data_size": 0 00:11:00.805 } 00:11:00.805 ] 00:11:00.805 }' 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.805 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.065 10:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:01.065 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.065 10:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.324 [2024-11-15 10:56:08.013176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.324 BaseBdev2 00:11:01.324 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.324 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:01.324 10:56:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:01.324 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:01.324 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:01.324 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:01.324 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:01.324 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:01.324 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.324 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.324 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.324 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:01.324 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.324 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.324 [ 00:11:01.324 { 00:11:01.324 "name": "BaseBdev2", 00:11:01.325 "aliases": [ 00:11:01.325 "39610895-f184-4940-bee4-38cd1f532635" 00:11:01.325 ], 00:11:01.325 "product_name": "Malloc disk", 00:11:01.325 "block_size": 512, 00:11:01.325 "num_blocks": 65536, 00:11:01.325 "uuid": "39610895-f184-4940-bee4-38cd1f532635", 00:11:01.325 "assigned_rate_limits": { 00:11:01.325 "rw_ios_per_sec": 0, 00:11:01.325 "rw_mbytes_per_sec": 0, 00:11:01.325 "r_mbytes_per_sec": 0, 00:11:01.325 "w_mbytes_per_sec": 0 00:11:01.325 }, 00:11:01.325 "claimed": true, 00:11:01.325 "claim_type": "exclusive_write", 00:11:01.325 "zoned": false, 00:11:01.325 "supported_io_types": { 
00:11:01.325 "read": true, 00:11:01.325 "write": true, 00:11:01.325 "unmap": true, 00:11:01.325 "flush": true, 00:11:01.325 "reset": true, 00:11:01.325 "nvme_admin": false, 00:11:01.325 "nvme_io": false, 00:11:01.325 "nvme_io_md": false, 00:11:01.325 "write_zeroes": true, 00:11:01.325 "zcopy": true, 00:11:01.325 "get_zone_info": false, 00:11:01.325 "zone_management": false, 00:11:01.325 "zone_append": false, 00:11:01.325 "compare": false, 00:11:01.325 "compare_and_write": false, 00:11:01.325 "abort": true, 00:11:01.325 "seek_hole": false, 00:11:01.325 "seek_data": false, 00:11:01.325 "copy": true, 00:11:01.325 "nvme_iov_md": false 00:11:01.325 }, 00:11:01.325 "memory_domains": [ 00:11:01.325 { 00:11:01.325 "dma_device_id": "system", 00:11:01.325 "dma_device_type": 1 00:11:01.325 }, 00:11:01.325 { 00:11:01.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.325 "dma_device_type": 2 00:11:01.325 } 00:11:01.325 ], 00:11:01.325 "driver_specific": {} 00:11:01.325 } 00:11:01.325 ] 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.325 "name": "Existed_Raid", 00:11:01.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.325 "strip_size_kb": 64, 00:11:01.325 "state": "configuring", 00:11:01.325 "raid_level": "concat", 00:11:01.325 "superblock": false, 00:11:01.325 "num_base_bdevs": 4, 00:11:01.325 "num_base_bdevs_discovered": 2, 00:11:01.325 "num_base_bdevs_operational": 4, 00:11:01.325 "base_bdevs_list": [ 00:11:01.325 { 00:11:01.325 "name": "BaseBdev1", 00:11:01.325 "uuid": "597318d3-f279-467b-b04a-a89548a3e80c", 00:11:01.325 "is_configured": true, 00:11:01.325 "data_offset": 0, 00:11:01.325 "data_size": 65536 00:11:01.325 }, 00:11:01.325 { 00:11:01.325 "name": "BaseBdev2", 00:11:01.325 "uuid": "39610895-f184-4940-bee4-38cd1f532635", 00:11:01.325 
"is_configured": true, 00:11:01.325 "data_offset": 0, 00:11:01.325 "data_size": 65536 00:11:01.325 }, 00:11:01.325 { 00:11:01.325 "name": "BaseBdev3", 00:11:01.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.325 "is_configured": false, 00:11:01.325 "data_offset": 0, 00:11:01.325 "data_size": 0 00:11:01.325 }, 00:11:01.325 { 00:11:01.325 "name": "BaseBdev4", 00:11:01.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.325 "is_configured": false, 00:11:01.325 "data_offset": 0, 00:11:01.325 "data_size": 0 00:11:01.325 } 00:11:01.325 ] 00:11:01.325 }' 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.325 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.584 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:01.584 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.584 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.844 [2024-11-15 10:56:08.548288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.844 BaseBdev3 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.844 [ 00:11:01.844 { 00:11:01.844 "name": "BaseBdev3", 00:11:01.844 "aliases": [ 00:11:01.844 "78dc9206-09bd-4d45-ad21-b1ebd3804bf6" 00:11:01.844 ], 00:11:01.844 "product_name": "Malloc disk", 00:11:01.844 "block_size": 512, 00:11:01.844 "num_blocks": 65536, 00:11:01.844 "uuid": "78dc9206-09bd-4d45-ad21-b1ebd3804bf6", 00:11:01.844 "assigned_rate_limits": { 00:11:01.844 "rw_ios_per_sec": 0, 00:11:01.844 "rw_mbytes_per_sec": 0, 00:11:01.844 "r_mbytes_per_sec": 0, 00:11:01.844 "w_mbytes_per_sec": 0 00:11:01.844 }, 00:11:01.844 "claimed": true, 00:11:01.844 "claim_type": "exclusive_write", 00:11:01.844 "zoned": false, 00:11:01.844 "supported_io_types": { 00:11:01.844 "read": true, 00:11:01.844 "write": true, 00:11:01.844 "unmap": true, 00:11:01.844 "flush": true, 00:11:01.844 "reset": true, 00:11:01.844 "nvme_admin": false, 00:11:01.844 "nvme_io": false, 00:11:01.844 "nvme_io_md": false, 00:11:01.844 "write_zeroes": true, 00:11:01.844 "zcopy": true, 00:11:01.844 "get_zone_info": false, 00:11:01.844 "zone_management": false, 00:11:01.844 "zone_append": false, 00:11:01.844 "compare": false, 00:11:01.844 "compare_and_write": false, 
00:11:01.844 "abort": true, 00:11:01.844 "seek_hole": false, 00:11:01.844 "seek_data": false, 00:11:01.844 "copy": true, 00:11:01.844 "nvme_iov_md": false 00:11:01.844 }, 00:11:01.844 "memory_domains": [ 00:11:01.844 { 00:11:01.844 "dma_device_id": "system", 00:11:01.844 "dma_device_type": 1 00:11:01.844 }, 00:11:01.844 { 00:11:01.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.844 "dma_device_type": 2 00:11:01.844 } 00:11:01.844 ], 00:11:01.844 "driver_specific": {} 00:11:01.844 } 00:11:01.844 ] 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.844 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.844 "name": "Existed_Raid", 00:11:01.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.844 "strip_size_kb": 64, 00:11:01.844 "state": "configuring", 00:11:01.844 "raid_level": "concat", 00:11:01.844 "superblock": false, 00:11:01.844 "num_base_bdevs": 4, 00:11:01.844 "num_base_bdevs_discovered": 3, 00:11:01.844 "num_base_bdevs_operational": 4, 00:11:01.844 "base_bdevs_list": [ 00:11:01.844 { 00:11:01.844 "name": "BaseBdev1", 00:11:01.844 "uuid": "597318d3-f279-467b-b04a-a89548a3e80c", 00:11:01.844 "is_configured": true, 00:11:01.844 "data_offset": 0, 00:11:01.844 "data_size": 65536 00:11:01.844 }, 00:11:01.844 { 00:11:01.844 "name": "BaseBdev2", 00:11:01.844 "uuid": "39610895-f184-4940-bee4-38cd1f532635", 00:11:01.844 "is_configured": true, 00:11:01.844 "data_offset": 0, 00:11:01.844 "data_size": 65536 00:11:01.844 }, 00:11:01.844 { 00:11:01.844 "name": "BaseBdev3", 00:11:01.844 "uuid": "78dc9206-09bd-4d45-ad21-b1ebd3804bf6", 00:11:01.844 "is_configured": true, 00:11:01.844 "data_offset": 0, 00:11:01.844 "data_size": 65536 00:11:01.844 }, 00:11:01.844 { 00:11:01.844 "name": "BaseBdev4", 00:11:01.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.844 "is_configured": false, 
00:11:01.844 "data_offset": 0, 00:11:01.844 "data_size": 0 00:11:01.844 } 00:11:01.844 ] 00:11:01.844 }' 00:11:01.845 10:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.845 10:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.413 [2024-11-15 10:56:09.089537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:02.413 [2024-11-15 10:56:09.089660] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:02.413 [2024-11-15 10:56:09.089690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:02.413 [2024-11-15 10:56:09.090015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:02.413 [2024-11-15 10:56:09.090234] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:02.413 [2024-11-15 10:56:09.090288] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:02.413 [2024-11-15 10:56:09.090648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.413 BaseBdev4 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.413 [ 00:11:02.413 { 00:11:02.413 "name": "BaseBdev4", 00:11:02.413 "aliases": [ 00:11:02.413 "6e21219a-886d-4cf1-9308-7f28e40733ad" 00:11:02.413 ], 00:11:02.413 "product_name": "Malloc disk", 00:11:02.413 "block_size": 512, 00:11:02.413 "num_blocks": 65536, 00:11:02.413 "uuid": "6e21219a-886d-4cf1-9308-7f28e40733ad", 00:11:02.413 "assigned_rate_limits": { 00:11:02.413 "rw_ios_per_sec": 0, 00:11:02.413 "rw_mbytes_per_sec": 0, 00:11:02.413 "r_mbytes_per_sec": 0, 00:11:02.413 "w_mbytes_per_sec": 0 00:11:02.413 }, 00:11:02.413 "claimed": true, 00:11:02.413 "claim_type": "exclusive_write", 00:11:02.413 "zoned": false, 00:11:02.413 "supported_io_types": { 00:11:02.413 "read": true, 00:11:02.413 "write": true, 00:11:02.413 "unmap": true, 00:11:02.413 "flush": true, 00:11:02.413 "reset": true, 00:11:02.413 
"nvme_admin": false, 00:11:02.413 "nvme_io": false, 00:11:02.413 "nvme_io_md": false, 00:11:02.413 "write_zeroes": true, 00:11:02.413 "zcopy": true, 00:11:02.413 "get_zone_info": false, 00:11:02.413 "zone_management": false, 00:11:02.413 "zone_append": false, 00:11:02.413 "compare": false, 00:11:02.413 "compare_and_write": false, 00:11:02.413 "abort": true, 00:11:02.413 "seek_hole": false, 00:11:02.413 "seek_data": false, 00:11:02.413 "copy": true, 00:11:02.413 "nvme_iov_md": false 00:11:02.413 }, 00:11:02.413 "memory_domains": [ 00:11:02.413 { 00:11:02.413 "dma_device_id": "system", 00:11:02.413 "dma_device_type": 1 00:11:02.413 }, 00:11:02.413 { 00:11:02.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.413 "dma_device_type": 2 00:11:02.413 } 00:11:02.413 ], 00:11:02.413 "driver_specific": {} 00:11:02.413 } 00:11:02.413 ] 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.413 
10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.413 "name": "Existed_Raid", 00:11:02.413 "uuid": "06311cbf-5b1e-4a48-8617-81de02fb5e3d", 00:11:02.413 "strip_size_kb": 64, 00:11:02.413 "state": "online", 00:11:02.413 "raid_level": "concat", 00:11:02.413 "superblock": false, 00:11:02.413 "num_base_bdevs": 4, 00:11:02.413 "num_base_bdevs_discovered": 4, 00:11:02.413 "num_base_bdevs_operational": 4, 00:11:02.413 "base_bdevs_list": [ 00:11:02.413 { 00:11:02.413 "name": "BaseBdev1", 00:11:02.413 "uuid": "597318d3-f279-467b-b04a-a89548a3e80c", 00:11:02.413 "is_configured": true, 00:11:02.413 "data_offset": 0, 00:11:02.413 "data_size": 65536 00:11:02.413 }, 00:11:02.413 { 00:11:02.413 "name": "BaseBdev2", 00:11:02.413 "uuid": "39610895-f184-4940-bee4-38cd1f532635", 00:11:02.413 "is_configured": true, 00:11:02.413 "data_offset": 0, 00:11:02.413 "data_size": 65536 00:11:02.413 }, 00:11:02.413 { 00:11:02.413 "name": "BaseBdev3", 
00:11:02.413 "uuid": "78dc9206-09bd-4d45-ad21-b1ebd3804bf6", 00:11:02.413 "is_configured": true, 00:11:02.413 "data_offset": 0, 00:11:02.413 "data_size": 65536 00:11:02.413 }, 00:11:02.413 { 00:11:02.413 "name": "BaseBdev4", 00:11:02.413 "uuid": "6e21219a-886d-4cf1-9308-7f28e40733ad", 00:11:02.413 "is_configured": true, 00:11:02.413 "data_offset": 0, 00:11:02.413 "data_size": 65536 00:11:02.413 } 00:11:02.413 ] 00:11:02.413 }' 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.413 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.672 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:02.672 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:02.672 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:02.672 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:02.672 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:02.672 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:02.672 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:02.672 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:02.672 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.672 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.672 [2024-11-15 10:56:09.569132] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.672 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.931 
10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:02.931 "name": "Existed_Raid", 00:11:02.931 "aliases": [ 00:11:02.931 "06311cbf-5b1e-4a48-8617-81de02fb5e3d" 00:11:02.931 ], 00:11:02.931 "product_name": "Raid Volume", 00:11:02.931 "block_size": 512, 00:11:02.931 "num_blocks": 262144, 00:11:02.931 "uuid": "06311cbf-5b1e-4a48-8617-81de02fb5e3d", 00:11:02.931 "assigned_rate_limits": { 00:11:02.931 "rw_ios_per_sec": 0, 00:11:02.931 "rw_mbytes_per_sec": 0, 00:11:02.931 "r_mbytes_per_sec": 0, 00:11:02.931 "w_mbytes_per_sec": 0 00:11:02.931 }, 00:11:02.931 "claimed": false, 00:11:02.931 "zoned": false, 00:11:02.931 "supported_io_types": { 00:11:02.931 "read": true, 00:11:02.931 "write": true, 00:11:02.931 "unmap": true, 00:11:02.931 "flush": true, 00:11:02.931 "reset": true, 00:11:02.931 "nvme_admin": false, 00:11:02.931 "nvme_io": false, 00:11:02.931 "nvme_io_md": false, 00:11:02.931 "write_zeroes": true, 00:11:02.931 "zcopy": false, 00:11:02.931 "get_zone_info": false, 00:11:02.931 "zone_management": false, 00:11:02.931 "zone_append": false, 00:11:02.931 "compare": false, 00:11:02.931 "compare_and_write": false, 00:11:02.931 "abort": false, 00:11:02.931 "seek_hole": false, 00:11:02.931 "seek_data": false, 00:11:02.931 "copy": false, 00:11:02.931 "nvme_iov_md": false 00:11:02.931 }, 00:11:02.931 "memory_domains": [ 00:11:02.931 { 00:11:02.931 "dma_device_id": "system", 00:11:02.931 "dma_device_type": 1 00:11:02.931 }, 00:11:02.931 { 00:11:02.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.931 "dma_device_type": 2 00:11:02.931 }, 00:11:02.931 { 00:11:02.931 "dma_device_id": "system", 00:11:02.931 "dma_device_type": 1 00:11:02.931 }, 00:11:02.931 { 00:11:02.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.931 "dma_device_type": 2 00:11:02.931 }, 00:11:02.931 { 00:11:02.931 "dma_device_id": "system", 00:11:02.931 "dma_device_type": 1 00:11:02.931 }, 00:11:02.931 { 00:11:02.931 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:02.931 "dma_device_type": 2 00:11:02.931 }, 00:11:02.931 { 00:11:02.931 "dma_device_id": "system", 00:11:02.932 "dma_device_type": 1 00:11:02.932 }, 00:11:02.932 { 00:11:02.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.932 "dma_device_type": 2 00:11:02.932 } 00:11:02.932 ], 00:11:02.932 "driver_specific": { 00:11:02.932 "raid": { 00:11:02.932 "uuid": "06311cbf-5b1e-4a48-8617-81de02fb5e3d", 00:11:02.932 "strip_size_kb": 64, 00:11:02.932 "state": "online", 00:11:02.932 "raid_level": "concat", 00:11:02.932 "superblock": false, 00:11:02.932 "num_base_bdevs": 4, 00:11:02.932 "num_base_bdevs_discovered": 4, 00:11:02.932 "num_base_bdevs_operational": 4, 00:11:02.932 "base_bdevs_list": [ 00:11:02.932 { 00:11:02.932 "name": "BaseBdev1", 00:11:02.932 "uuid": "597318d3-f279-467b-b04a-a89548a3e80c", 00:11:02.932 "is_configured": true, 00:11:02.932 "data_offset": 0, 00:11:02.932 "data_size": 65536 00:11:02.932 }, 00:11:02.932 { 00:11:02.932 "name": "BaseBdev2", 00:11:02.932 "uuid": "39610895-f184-4940-bee4-38cd1f532635", 00:11:02.932 "is_configured": true, 00:11:02.932 "data_offset": 0, 00:11:02.932 "data_size": 65536 00:11:02.932 }, 00:11:02.932 { 00:11:02.932 "name": "BaseBdev3", 00:11:02.932 "uuid": "78dc9206-09bd-4d45-ad21-b1ebd3804bf6", 00:11:02.932 "is_configured": true, 00:11:02.932 "data_offset": 0, 00:11:02.932 "data_size": 65536 00:11:02.932 }, 00:11:02.932 { 00:11:02.932 "name": "BaseBdev4", 00:11:02.932 "uuid": "6e21219a-886d-4cf1-9308-7f28e40733ad", 00:11:02.932 "is_configured": true, 00:11:02.932 "data_offset": 0, 00:11:02.932 "data_size": 65536 00:11:02.932 } 00:11:02.932 ] 00:11:02.932 } 00:11:02.932 } 00:11:02.932 }' 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:02.932 BaseBdev2 
00:11:02.932 BaseBdev3 00:11:02.932 BaseBdev4' 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.932 10:56:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.932 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.194 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.194 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.194 10:56:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.194 10:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:03.194 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.194 10:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.194 [2024-11-15 10:56:09.904382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:03.194 [2024-11-15 10:56:09.904496] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.194 [2024-11-15 10:56:09.904580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.194 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.194 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:03.194 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:03.194 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:03.194 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:03.194 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:03.194 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:03.194 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.194 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:03.194 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.194 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:03.194 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.194 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.194 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.194 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.194 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.195 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.195 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.195 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.195 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.195 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.195 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.195 "name": "Existed_Raid", 00:11:03.195 "uuid": "06311cbf-5b1e-4a48-8617-81de02fb5e3d", 00:11:03.195 "strip_size_kb": 64, 00:11:03.195 "state": "offline", 00:11:03.195 "raid_level": "concat", 00:11:03.195 "superblock": false, 00:11:03.195 "num_base_bdevs": 4, 00:11:03.195 "num_base_bdevs_discovered": 3, 00:11:03.195 "num_base_bdevs_operational": 3, 00:11:03.195 "base_bdevs_list": [ 00:11:03.195 { 00:11:03.195 "name": null, 00:11:03.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.195 "is_configured": false, 00:11:03.195 "data_offset": 0, 00:11:03.195 "data_size": 65536 00:11:03.195 }, 00:11:03.195 { 00:11:03.195 "name": "BaseBdev2", 00:11:03.195 "uuid": "39610895-f184-4940-bee4-38cd1f532635", 00:11:03.195 "is_configured": 
true, 00:11:03.195 "data_offset": 0, 00:11:03.195 "data_size": 65536 00:11:03.195 }, 00:11:03.195 { 00:11:03.195 "name": "BaseBdev3", 00:11:03.195 "uuid": "78dc9206-09bd-4d45-ad21-b1ebd3804bf6", 00:11:03.195 "is_configured": true, 00:11:03.195 "data_offset": 0, 00:11:03.195 "data_size": 65536 00:11:03.195 }, 00:11:03.195 { 00:11:03.195 "name": "BaseBdev4", 00:11:03.195 "uuid": "6e21219a-886d-4cf1-9308-7f28e40733ad", 00:11:03.195 "is_configured": true, 00:11:03.195 "data_offset": 0, 00:11:03.195 "data_size": 65536 00:11:03.195 } 00:11:03.195 ] 00:11:03.195 }' 00:11:03.195 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.195 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.768 [2024-11-15 10:56:10.537118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.768 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.027 [2024-11-15 10:56:10.696223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:04.027 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.027 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:04.027 10:56:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:04.027 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.027 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:04.027 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.027 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.027 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.027 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:04.027 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:04.027 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:04.027 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.027 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.027 [2024-11-15 10:56:10.856360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:04.027 [2024-11-15 10:56:10.856421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:04.286 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.286 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:04.286 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:04.286 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.286 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:04.286 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.286 10:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:04.286 10:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.286 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:04.286 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:04.286 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:04.286 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:04.286 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:04.286 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:04.286 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.286 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.286 BaseBdev2 00:11:04.286 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.286 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:04.286 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:04.286 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:04.286 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:04.286 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:04.286 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.287 [ 00:11:04.287 { 00:11:04.287 "name": "BaseBdev2", 00:11:04.287 "aliases": [ 00:11:04.287 "674669c8-4347-42a7-b219-21dc95d930fa" 00:11:04.287 ], 00:11:04.287 "product_name": "Malloc disk", 00:11:04.287 "block_size": 512, 00:11:04.287 "num_blocks": 65536, 00:11:04.287 "uuid": "674669c8-4347-42a7-b219-21dc95d930fa", 00:11:04.287 "assigned_rate_limits": { 00:11:04.287 "rw_ios_per_sec": 0, 00:11:04.287 "rw_mbytes_per_sec": 0, 00:11:04.287 "r_mbytes_per_sec": 0, 00:11:04.287 "w_mbytes_per_sec": 0 00:11:04.287 }, 00:11:04.287 "claimed": false, 00:11:04.287 "zoned": false, 00:11:04.287 "supported_io_types": { 00:11:04.287 "read": true, 00:11:04.287 "write": true, 00:11:04.287 "unmap": true, 00:11:04.287 "flush": true, 00:11:04.287 "reset": true, 00:11:04.287 "nvme_admin": false, 00:11:04.287 "nvme_io": false, 00:11:04.287 "nvme_io_md": false, 00:11:04.287 "write_zeroes": true, 00:11:04.287 "zcopy": true, 00:11:04.287 "get_zone_info": false, 00:11:04.287 "zone_management": false, 00:11:04.287 "zone_append": false, 00:11:04.287 "compare": false, 00:11:04.287 "compare_and_write": false, 00:11:04.287 "abort": true, 00:11:04.287 "seek_hole": false, 00:11:04.287 
"seek_data": false, 00:11:04.287 "copy": true, 00:11:04.287 "nvme_iov_md": false 00:11:04.287 }, 00:11:04.287 "memory_domains": [ 00:11:04.287 { 00:11:04.287 "dma_device_id": "system", 00:11:04.287 "dma_device_type": 1 00:11:04.287 }, 00:11:04.287 { 00:11:04.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.287 "dma_device_type": 2 00:11:04.287 } 00:11:04.287 ], 00:11:04.287 "driver_specific": {} 00:11:04.287 } 00:11:04.287 ] 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.287 BaseBdev3 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.287 [ 00:11:04.287 { 00:11:04.287 "name": "BaseBdev3", 00:11:04.287 "aliases": [ 00:11:04.287 "1410dac0-cc43-49e2-bb0f-50d0066630d5" 00:11:04.287 ], 00:11:04.287 "product_name": "Malloc disk", 00:11:04.287 "block_size": 512, 00:11:04.287 "num_blocks": 65536, 00:11:04.287 "uuid": "1410dac0-cc43-49e2-bb0f-50d0066630d5", 00:11:04.287 "assigned_rate_limits": { 00:11:04.287 "rw_ios_per_sec": 0, 00:11:04.287 "rw_mbytes_per_sec": 0, 00:11:04.287 "r_mbytes_per_sec": 0, 00:11:04.287 "w_mbytes_per_sec": 0 00:11:04.287 }, 00:11:04.287 "claimed": false, 00:11:04.287 "zoned": false, 00:11:04.287 "supported_io_types": { 00:11:04.287 "read": true, 00:11:04.287 "write": true, 00:11:04.287 "unmap": true, 00:11:04.287 "flush": true, 00:11:04.287 "reset": true, 00:11:04.287 "nvme_admin": false, 00:11:04.287 "nvme_io": false, 00:11:04.287 "nvme_io_md": false, 00:11:04.287 "write_zeroes": true, 00:11:04.287 "zcopy": true, 00:11:04.287 "get_zone_info": false, 00:11:04.287 "zone_management": false, 00:11:04.287 "zone_append": false, 00:11:04.287 "compare": false, 00:11:04.287 "compare_and_write": false, 00:11:04.287 "abort": true, 00:11:04.287 "seek_hole": false, 00:11:04.287 "seek_data": false, 
00:11:04.287 "copy": true, 00:11:04.287 "nvme_iov_md": false 00:11:04.287 }, 00:11:04.287 "memory_domains": [ 00:11:04.287 { 00:11:04.287 "dma_device_id": "system", 00:11:04.287 "dma_device_type": 1 00:11:04.287 }, 00:11:04.287 { 00:11:04.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.287 "dma_device_type": 2 00:11:04.287 } 00:11:04.287 ], 00:11:04.287 "driver_specific": {} 00:11:04.287 } 00:11:04.287 ] 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.287 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.546 BaseBdev4 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:04.546 
10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.546 [ 00:11:04.546 { 00:11:04.546 "name": "BaseBdev4", 00:11:04.546 "aliases": [ 00:11:04.546 "1be37dbb-abfc-4aeb-abff-756fce50b163" 00:11:04.546 ], 00:11:04.546 "product_name": "Malloc disk", 00:11:04.546 "block_size": 512, 00:11:04.546 "num_blocks": 65536, 00:11:04.546 "uuid": "1be37dbb-abfc-4aeb-abff-756fce50b163", 00:11:04.546 "assigned_rate_limits": { 00:11:04.546 "rw_ios_per_sec": 0, 00:11:04.546 "rw_mbytes_per_sec": 0, 00:11:04.546 "r_mbytes_per_sec": 0, 00:11:04.546 "w_mbytes_per_sec": 0 00:11:04.546 }, 00:11:04.546 "claimed": false, 00:11:04.546 "zoned": false, 00:11:04.546 "supported_io_types": { 00:11:04.546 "read": true, 00:11:04.546 "write": true, 00:11:04.546 "unmap": true, 00:11:04.546 "flush": true, 00:11:04.546 "reset": true, 00:11:04.546 "nvme_admin": false, 00:11:04.546 "nvme_io": false, 00:11:04.546 "nvme_io_md": false, 00:11:04.546 "write_zeroes": true, 00:11:04.546 "zcopy": true, 00:11:04.546 "get_zone_info": false, 00:11:04.546 "zone_management": false, 00:11:04.546 "zone_append": false, 00:11:04.546 "compare": false, 00:11:04.546 "compare_and_write": false, 00:11:04.546 "abort": true, 00:11:04.546 "seek_hole": false, 00:11:04.546 "seek_data": false, 00:11:04.546 
"copy": true, 00:11:04.546 "nvme_iov_md": false 00:11:04.546 }, 00:11:04.546 "memory_domains": [ 00:11:04.546 { 00:11:04.546 "dma_device_id": "system", 00:11:04.546 "dma_device_type": 1 00:11:04.546 }, 00:11:04.546 { 00:11:04.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.546 "dma_device_type": 2 00:11:04.546 } 00:11:04.546 ], 00:11:04.546 "driver_specific": {} 00:11:04.546 } 00:11:04.546 ] 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.546 [2024-11-15 10:56:11.262566] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.546 [2024-11-15 10:56:11.262684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.546 [2024-11-15 10:56:11.262756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.546 [2024-11-15 10:56:11.264815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.546 [2024-11-15 10:56:11.264925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.546 10:56:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.546 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.546 "name": "Existed_Raid", 00:11:04.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.546 "strip_size_kb": 64, 00:11:04.546 "state": "configuring", 00:11:04.546 
"raid_level": "concat", 00:11:04.546 "superblock": false, 00:11:04.546 "num_base_bdevs": 4, 00:11:04.546 "num_base_bdevs_discovered": 3, 00:11:04.546 "num_base_bdevs_operational": 4, 00:11:04.546 "base_bdevs_list": [ 00:11:04.546 { 00:11:04.546 "name": "BaseBdev1", 00:11:04.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.546 "is_configured": false, 00:11:04.546 "data_offset": 0, 00:11:04.546 "data_size": 0 00:11:04.546 }, 00:11:04.546 { 00:11:04.546 "name": "BaseBdev2", 00:11:04.546 "uuid": "674669c8-4347-42a7-b219-21dc95d930fa", 00:11:04.546 "is_configured": true, 00:11:04.546 "data_offset": 0, 00:11:04.546 "data_size": 65536 00:11:04.546 }, 00:11:04.546 { 00:11:04.546 "name": "BaseBdev3", 00:11:04.546 "uuid": "1410dac0-cc43-49e2-bb0f-50d0066630d5", 00:11:04.546 "is_configured": true, 00:11:04.546 "data_offset": 0, 00:11:04.546 "data_size": 65536 00:11:04.546 }, 00:11:04.546 { 00:11:04.547 "name": "BaseBdev4", 00:11:04.547 "uuid": "1be37dbb-abfc-4aeb-abff-756fce50b163", 00:11:04.547 "is_configured": true, 00:11:04.547 "data_offset": 0, 00:11:04.547 "data_size": 65536 00:11:04.547 } 00:11:04.547 ] 00:11:04.547 }' 00:11:04.547 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.547 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.114 [2024-11-15 10:56:11.745771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.114 "name": "Existed_Raid", 00:11:05.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.114 "strip_size_kb": 64, 00:11:05.114 "state": "configuring", 00:11:05.114 "raid_level": "concat", 00:11:05.114 "superblock": false, 
00:11:05.114 "num_base_bdevs": 4, 00:11:05.114 "num_base_bdevs_discovered": 2, 00:11:05.114 "num_base_bdevs_operational": 4, 00:11:05.114 "base_bdevs_list": [ 00:11:05.114 { 00:11:05.114 "name": "BaseBdev1", 00:11:05.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.114 "is_configured": false, 00:11:05.114 "data_offset": 0, 00:11:05.114 "data_size": 0 00:11:05.114 }, 00:11:05.114 { 00:11:05.114 "name": null, 00:11:05.114 "uuid": "674669c8-4347-42a7-b219-21dc95d930fa", 00:11:05.114 "is_configured": false, 00:11:05.114 "data_offset": 0, 00:11:05.114 "data_size": 65536 00:11:05.114 }, 00:11:05.114 { 00:11:05.114 "name": "BaseBdev3", 00:11:05.114 "uuid": "1410dac0-cc43-49e2-bb0f-50d0066630d5", 00:11:05.114 "is_configured": true, 00:11:05.114 "data_offset": 0, 00:11:05.114 "data_size": 65536 00:11:05.114 }, 00:11:05.114 { 00:11:05.114 "name": "BaseBdev4", 00:11:05.114 "uuid": "1be37dbb-abfc-4aeb-abff-756fce50b163", 00:11:05.114 "is_configured": true, 00:11:05.114 "data_offset": 0, 00:11:05.114 "data_size": 65536 00:11:05.114 } 00:11:05.114 ] 00:11:05.114 }' 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.114 10:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:05.372 10:56:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.372 [2024-11-15 10:56:12.276748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.372 BaseBdev1 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.372 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.373 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:05.373 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.373 10:56:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.632 [ 00:11:05.632 { 00:11:05.632 "name": "BaseBdev1", 00:11:05.632 "aliases": [ 00:11:05.632 "96d5ebe4-f7fc-4d04-9f9b-b6526b5a4161" 00:11:05.632 ], 00:11:05.632 "product_name": "Malloc disk", 00:11:05.632 "block_size": 512, 00:11:05.632 "num_blocks": 65536, 00:11:05.632 "uuid": "96d5ebe4-f7fc-4d04-9f9b-b6526b5a4161", 00:11:05.632 "assigned_rate_limits": { 00:11:05.632 "rw_ios_per_sec": 0, 00:11:05.632 "rw_mbytes_per_sec": 0, 00:11:05.632 "r_mbytes_per_sec": 0, 00:11:05.632 "w_mbytes_per_sec": 0 00:11:05.632 }, 00:11:05.632 "claimed": true, 00:11:05.632 "claim_type": "exclusive_write", 00:11:05.632 "zoned": false, 00:11:05.632 "supported_io_types": { 00:11:05.632 "read": true, 00:11:05.632 "write": true, 00:11:05.632 "unmap": true, 00:11:05.632 "flush": true, 00:11:05.632 "reset": true, 00:11:05.632 "nvme_admin": false, 00:11:05.632 "nvme_io": false, 00:11:05.632 "nvme_io_md": false, 00:11:05.632 "write_zeroes": true, 00:11:05.632 "zcopy": true, 00:11:05.632 "get_zone_info": false, 00:11:05.632 "zone_management": false, 00:11:05.632 "zone_append": false, 00:11:05.632 "compare": false, 00:11:05.632 "compare_and_write": false, 00:11:05.632 "abort": true, 00:11:05.632 "seek_hole": false, 00:11:05.632 "seek_data": false, 00:11:05.632 "copy": true, 00:11:05.632 "nvme_iov_md": false 00:11:05.632 }, 00:11:05.632 "memory_domains": [ 00:11:05.632 { 00:11:05.632 "dma_device_id": "system", 00:11:05.632 "dma_device_type": 1 00:11:05.632 }, 00:11:05.632 { 00:11:05.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.632 "dma_device_type": 2 00:11:05.632 } 00:11:05.632 ], 00:11:05.632 "driver_specific": {} 00:11:05.632 } 00:11:05.632 ] 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.632 "name": "Existed_Raid", 00:11:05.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.632 "strip_size_kb": 64, 00:11:05.632 "state": "configuring", 00:11:05.632 "raid_level": "concat", 00:11:05.632 "superblock": false, 
00:11:05.632 "num_base_bdevs": 4, 00:11:05.632 "num_base_bdevs_discovered": 3, 00:11:05.632 "num_base_bdevs_operational": 4, 00:11:05.632 "base_bdevs_list": [ 00:11:05.632 { 00:11:05.632 "name": "BaseBdev1", 00:11:05.632 "uuid": "96d5ebe4-f7fc-4d04-9f9b-b6526b5a4161", 00:11:05.632 "is_configured": true, 00:11:05.632 "data_offset": 0, 00:11:05.632 "data_size": 65536 00:11:05.632 }, 00:11:05.632 { 00:11:05.632 "name": null, 00:11:05.632 "uuid": "674669c8-4347-42a7-b219-21dc95d930fa", 00:11:05.632 "is_configured": false, 00:11:05.632 "data_offset": 0, 00:11:05.632 "data_size": 65536 00:11:05.632 }, 00:11:05.632 { 00:11:05.632 "name": "BaseBdev3", 00:11:05.632 "uuid": "1410dac0-cc43-49e2-bb0f-50d0066630d5", 00:11:05.632 "is_configured": true, 00:11:05.632 "data_offset": 0, 00:11:05.632 "data_size": 65536 00:11:05.632 }, 00:11:05.632 { 00:11:05.632 "name": "BaseBdev4", 00:11:05.632 "uuid": "1be37dbb-abfc-4aeb-abff-756fce50b163", 00:11:05.632 "is_configured": true, 00:11:05.632 "data_offset": 0, 00:11:05.632 "data_size": 65536 00:11:05.632 } 00:11:05.632 ] 00:11:05.632 }' 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.632 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:05.892 10:56:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.892 [2024-11-15 10:56:12.788031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.892 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.152 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.152 "name": "Existed_Raid", 00:11:06.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.152 "strip_size_kb": 64, 00:11:06.152 "state": "configuring", 00:11:06.152 "raid_level": "concat", 00:11:06.152 "superblock": false, 00:11:06.152 "num_base_bdevs": 4, 00:11:06.152 "num_base_bdevs_discovered": 2, 00:11:06.152 "num_base_bdevs_operational": 4, 00:11:06.152 "base_bdevs_list": [ 00:11:06.152 { 00:11:06.152 "name": "BaseBdev1", 00:11:06.152 "uuid": "96d5ebe4-f7fc-4d04-9f9b-b6526b5a4161", 00:11:06.152 "is_configured": true, 00:11:06.152 "data_offset": 0, 00:11:06.152 "data_size": 65536 00:11:06.152 }, 00:11:06.152 { 00:11:06.152 "name": null, 00:11:06.152 "uuid": "674669c8-4347-42a7-b219-21dc95d930fa", 00:11:06.152 "is_configured": false, 00:11:06.152 "data_offset": 0, 00:11:06.152 "data_size": 65536 00:11:06.152 }, 00:11:06.152 { 00:11:06.152 "name": null, 00:11:06.152 "uuid": "1410dac0-cc43-49e2-bb0f-50d0066630d5", 00:11:06.152 "is_configured": false, 00:11:06.152 "data_offset": 0, 00:11:06.152 "data_size": 65536 00:11:06.152 }, 00:11:06.152 { 00:11:06.152 "name": "BaseBdev4", 00:11:06.152 "uuid": "1be37dbb-abfc-4aeb-abff-756fce50b163", 00:11:06.152 "is_configured": true, 00:11:06.152 "data_offset": 0, 00:11:06.152 "data_size": 65536 00:11:06.152 } 00:11:06.152 ] 00:11:06.152 }' 00:11:06.152 10:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.152 10:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.411 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:06.411 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:06.411 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.411 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.411 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.411 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:06.411 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:06.411 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.411 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.411 [2024-11-15 10:56:13.287160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.412 "name": "Existed_Raid", 00:11:06.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.412 "strip_size_kb": 64, 00:11:06.412 "state": "configuring", 00:11:06.412 "raid_level": "concat", 00:11:06.412 "superblock": false, 00:11:06.412 "num_base_bdevs": 4, 00:11:06.412 "num_base_bdevs_discovered": 3, 00:11:06.412 "num_base_bdevs_operational": 4, 00:11:06.412 "base_bdevs_list": [ 00:11:06.412 { 00:11:06.412 "name": "BaseBdev1", 00:11:06.412 "uuid": "96d5ebe4-f7fc-4d04-9f9b-b6526b5a4161", 00:11:06.412 "is_configured": true, 00:11:06.412 "data_offset": 0, 00:11:06.412 "data_size": 65536 00:11:06.412 }, 00:11:06.412 { 00:11:06.412 "name": null, 00:11:06.412 "uuid": "674669c8-4347-42a7-b219-21dc95d930fa", 00:11:06.412 "is_configured": false, 00:11:06.412 "data_offset": 0, 00:11:06.412 "data_size": 65536 00:11:06.412 }, 00:11:06.412 { 00:11:06.412 "name": "BaseBdev3", 00:11:06.412 "uuid": "1410dac0-cc43-49e2-bb0f-50d0066630d5", 00:11:06.412 
"is_configured": true, 00:11:06.412 "data_offset": 0, 00:11:06.412 "data_size": 65536 00:11:06.412 }, 00:11:06.412 { 00:11:06.412 "name": "BaseBdev4", 00:11:06.412 "uuid": "1be37dbb-abfc-4aeb-abff-756fce50b163", 00:11:06.412 "is_configured": true, 00:11:06.412 "data_offset": 0, 00:11:06.412 "data_size": 65536 00:11:06.412 } 00:11:06.412 ] 00:11:06.412 }' 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.412 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.998 [2024-11-15 10:56:13.754403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.998 "name": "Existed_Raid", 00:11:06.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.998 "strip_size_kb": 64, 00:11:06.998 "state": "configuring", 00:11:06.998 "raid_level": "concat", 00:11:06.998 "superblock": false, 00:11:06.998 "num_base_bdevs": 4, 00:11:06.998 "num_base_bdevs_discovered": 2, 00:11:06.998 "num_base_bdevs_operational": 4, 
00:11:06.998 "base_bdevs_list": [ 00:11:06.998 { 00:11:06.998 "name": null, 00:11:06.998 "uuid": "96d5ebe4-f7fc-4d04-9f9b-b6526b5a4161", 00:11:06.998 "is_configured": false, 00:11:06.998 "data_offset": 0, 00:11:06.998 "data_size": 65536 00:11:06.998 }, 00:11:06.998 { 00:11:06.998 "name": null, 00:11:06.998 "uuid": "674669c8-4347-42a7-b219-21dc95d930fa", 00:11:06.998 "is_configured": false, 00:11:06.998 "data_offset": 0, 00:11:06.998 "data_size": 65536 00:11:06.998 }, 00:11:06.998 { 00:11:06.998 "name": "BaseBdev3", 00:11:06.998 "uuid": "1410dac0-cc43-49e2-bb0f-50d0066630d5", 00:11:06.998 "is_configured": true, 00:11:06.998 "data_offset": 0, 00:11:06.998 "data_size": 65536 00:11:06.998 }, 00:11:06.998 { 00:11:06.998 "name": "BaseBdev4", 00:11:06.998 "uuid": "1be37dbb-abfc-4aeb-abff-756fce50b163", 00:11:06.998 "is_configured": true, 00:11:06.998 "data_offset": 0, 00:11:06.998 "data_size": 65536 00:11:06.998 } 00:11:06.998 ] 00:11:06.998 }' 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.998 10:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:07.594 10:56:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 [2024-11-15 10:56:14.418415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.594 10:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.594 10:56:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.595 10:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.595 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.595 "name": "Existed_Raid", 00:11:07.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.595 "strip_size_kb": 64, 00:11:07.595 "state": "configuring", 00:11:07.595 "raid_level": "concat", 00:11:07.595 "superblock": false, 00:11:07.595 "num_base_bdevs": 4, 00:11:07.595 "num_base_bdevs_discovered": 3, 00:11:07.595 "num_base_bdevs_operational": 4, 00:11:07.595 "base_bdevs_list": [ 00:11:07.595 { 00:11:07.595 "name": null, 00:11:07.595 "uuid": "96d5ebe4-f7fc-4d04-9f9b-b6526b5a4161", 00:11:07.595 "is_configured": false, 00:11:07.595 "data_offset": 0, 00:11:07.595 "data_size": 65536 00:11:07.595 }, 00:11:07.595 { 00:11:07.595 "name": "BaseBdev2", 00:11:07.595 "uuid": "674669c8-4347-42a7-b219-21dc95d930fa", 00:11:07.595 "is_configured": true, 00:11:07.595 "data_offset": 0, 00:11:07.595 "data_size": 65536 00:11:07.595 }, 00:11:07.595 { 00:11:07.595 "name": "BaseBdev3", 00:11:07.595 "uuid": "1410dac0-cc43-49e2-bb0f-50d0066630d5", 00:11:07.595 "is_configured": true, 00:11:07.595 "data_offset": 0, 00:11:07.595 "data_size": 65536 00:11:07.595 }, 00:11:07.595 { 00:11:07.595 "name": "BaseBdev4", 00:11:07.595 "uuid": "1be37dbb-abfc-4aeb-abff-756fce50b163", 00:11:07.595 "is_configured": true, 00:11:07.595 "data_offset": 0, 00:11:07.595 "data_size": 65536 00:11:07.595 } 00:11:07.595 ] 00:11:07.595 }' 00:11:07.595 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.595 10:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.164 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.164 10:56:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:08.164 10:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.164 10:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.164 10:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.164 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:08.164 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.164 10:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.164 10:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.164 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:08.164 10:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.164 10:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 96d5ebe4-f7fc-4d04-9f9b-b6526b5a4161 00:11:08.164 10:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.164 10:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.164 [2024-11-15 10:56:15.032964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:08.164 [2024-11-15 10:56:15.033020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:08.164 [2024-11-15 10:56:15.033028] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:08.164 [2024-11-15 10:56:15.033294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:08.164 [2024-11-15 10:56:15.033473] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:08.164 [2024-11-15 10:56:15.033493] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:08.164 [2024-11-15 10:56:15.033743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.164 NewBaseBdev 00:11:08.164 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.164 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:08.164 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:08.164 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:08.164 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:08.164 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:08.164 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:08.164 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:08.164 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.164 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.164 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.164 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:08.164 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.164 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.164 [ 00:11:08.164 { 
00:11:08.164 "name": "NewBaseBdev", 00:11:08.164 "aliases": [ 00:11:08.164 "96d5ebe4-f7fc-4d04-9f9b-b6526b5a4161" 00:11:08.164 ], 00:11:08.164 "product_name": "Malloc disk", 00:11:08.164 "block_size": 512, 00:11:08.164 "num_blocks": 65536, 00:11:08.164 "uuid": "96d5ebe4-f7fc-4d04-9f9b-b6526b5a4161", 00:11:08.164 "assigned_rate_limits": { 00:11:08.164 "rw_ios_per_sec": 0, 00:11:08.164 "rw_mbytes_per_sec": 0, 00:11:08.164 "r_mbytes_per_sec": 0, 00:11:08.164 "w_mbytes_per_sec": 0 00:11:08.164 }, 00:11:08.164 "claimed": true, 00:11:08.164 "claim_type": "exclusive_write", 00:11:08.164 "zoned": false, 00:11:08.164 "supported_io_types": { 00:11:08.164 "read": true, 00:11:08.164 "write": true, 00:11:08.164 "unmap": true, 00:11:08.164 "flush": true, 00:11:08.164 "reset": true, 00:11:08.164 "nvme_admin": false, 00:11:08.164 "nvme_io": false, 00:11:08.164 "nvme_io_md": false, 00:11:08.164 "write_zeroes": true, 00:11:08.164 "zcopy": true, 00:11:08.164 "get_zone_info": false, 00:11:08.164 "zone_management": false, 00:11:08.164 "zone_append": false, 00:11:08.164 "compare": false, 00:11:08.164 "compare_and_write": false, 00:11:08.164 "abort": true, 00:11:08.164 "seek_hole": false, 00:11:08.165 "seek_data": false, 00:11:08.165 "copy": true, 00:11:08.165 "nvme_iov_md": false 00:11:08.165 }, 00:11:08.165 "memory_domains": [ 00:11:08.165 { 00:11:08.165 "dma_device_id": "system", 00:11:08.165 "dma_device_type": 1 00:11:08.165 }, 00:11:08.165 { 00:11:08.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.165 "dma_device_type": 2 00:11:08.165 } 00:11:08.165 ], 00:11:08.165 "driver_specific": {} 00:11:08.165 } 00:11:08.165 ] 00:11:08.165 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.165 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:08.165 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:08.165 
10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.165 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.165 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.165 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.165 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.165 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.165 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.165 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.165 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.165 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.165 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.165 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.165 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.426 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.426 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.426 "name": "Existed_Raid", 00:11:08.426 "uuid": "1197cf22-2032-4cbf-90c7-af0a60e54499", 00:11:08.426 "strip_size_kb": 64, 00:11:08.426 "state": "online", 00:11:08.426 "raid_level": "concat", 00:11:08.426 "superblock": false, 00:11:08.426 "num_base_bdevs": 4, 00:11:08.426 "num_base_bdevs_discovered": 4, 00:11:08.426 
"num_base_bdevs_operational": 4, 00:11:08.426 "base_bdevs_list": [ 00:11:08.426 { 00:11:08.426 "name": "NewBaseBdev", 00:11:08.426 "uuid": "96d5ebe4-f7fc-4d04-9f9b-b6526b5a4161", 00:11:08.426 "is_configured": true, 00:11:08.426 "data_offset": 0, 00:11:08.426 "data_size": 65536 00:11:08.426 }, 00:11:08.426 { 00:11:08.426 "name": "BaseBdev2", 00:11:08.426 "uuid": "674669c8-4347-42a7-b219-21dc95d930fa", 00:11:08.426 "is_configured": true, 00:11:08.426 "data_offset": 0, 00:11:08.426 "data_size": 65536 00:11:08.426 }, 00:11:08.426 { 00:11:08.426 "name": "BaseBdev3", 00:11:08.426 "uuid": "1410dac0-cc43-49e2-bb0f-50d0066630d5", 00:11:08.426 "is_configured": true, 00:11:08.426 "data_offset": 0, 00:11:08.426 "data_size": 65536 00:11:08.426 }, 00:11:08.426 { 00:11:08.426 "name": "BaseBdev4", 00:11:08.426 "uuid": "1be37dbb-abfc-4aeb-abff-756fce50b163", 00:11:08.426 "is_configured": true, 00:11:08.426 "data_offset": 0, 00:11:08.426 "data_size": 65536 00:11:08.426 } 00:11:08.426 ] 00:11:08.426 }' 00:11:08.426 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.426 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.684 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:08.684 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:08.684 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:08.684 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:08.684 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:08.684 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:08.684 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:08.684 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.684 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.684 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:08.684 [2024-11-15 10:56:15.600431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:08.944 "name": "Existed_Raid", 00:11:08.944 "aliases": [ 00:11:08.944 "1197cf22-2032-4cbf-90c7-af0a60e54499" 00:11:08.944 ], 00:11:08.944 "product_name": "Raid Volume", 00:11:08.944 "block_size": 512, 00:11:08.944 "num_blocks": 262144, 00:11:08.944 "uuid": "1197cf22-2032-4cbf-90c7-af0a60e54499", 00:11:08.944 "assigned_rate_limits": { 00:11:08.944 "rw_ios_per_sec": 0, 00:11:08.944 "rw_mbytes_per_sec": 0, 00:11:08.944 "r_mbytes_per_sec": 0, 00:11:08.944 "w_mbytes_per_sec": 0 00:11:08.944 }, 00:11:08.944 "claimed": false, 00:11:08.944 "zoned": false, 00:11:08.944 "supported_io_types": { 00:11:08.944 "read": true, 00:11:08.944 "write": true, 00:11:08.944 "unmap": true, 00:11:08.944 "flush": true, 00:11:08.944 "reset": true, 00:11:08.944 "nvme_admin": false, 00:11:08.944 "nvme_io": false, 00:11:08.944 "nvme_io_md": false, 00:11:08.944 "write_zeroes": true, 00:11:08.944 "zcopy": false, 00:11:08.944 "get_zone_info": false, 00:11:08.944 "zone_management": false, 00:11:08.944 "zone_append": false, 00:11:08.944 "compare": false, 00:11:08.944 "compare_and_write": false, 00:11:08.944 "abort": false, 00:11:08.944 "seek_hole": false, 00:11:08.944 "seek_data": false, 00:11:08.944 "copy": false, 00:11:08.944 "nvme_iov_md": false 00:11:08.944 }, 00:11:08.944 "memory_domains": [ 00:11:08.944 { 00:11:08.944 "dma_device_id": "system", 
00:11:08.944 "dma_device_type": 1 00:11:08.944 }, 00:11:08.944 { 00:11:08.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.944 "dma_device_type": 2 00:11:08.944 }, 00:11:08.944 { 00:11:08.944 "dma_device_id": "system", 00:11:08.944 "dma_device_type": 1 00:11:08.944 }, 00:11:08.944 { 00:11:08.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.944 "dma_device_type": 2 00:11:08.944 }, 00:11:08.944 { 00:11:08.944 "dma_device_id": "system", 00:11:08.944 "dma_device_type": 1 00:11:08.944 }, 00:11:08.944 { 00:11:08.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.944 "dma_device_type": 2 00:11:08.944 }, 00:11:08.944 { 00:11:08.944 "dma_device_id": "system", 00:11:08.944 "dma_device_type": 1 00:11:08.944 }, 00:11:08.944 { 00:11:08.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.944 "dma_device_type": 2 00:11:08.944 } 00:11:08.944 ], 00:11:08.944 "driver_specific": { 00:11:08.944 "raid": { 00:11:08.944 "uuid": "1197cf22-2032-4cbf-90c7-af0a60e54499", 00:11:08.944 "strip_size_kb": 64, 00:11:08.944 "state": "online", 00:11:08.944 "raid_level": "concat", 00:11:08.944 "superblock": false, 00:11:08.944 "num_base_bdevs": 4, 00:11:08.944 "num_base_bdevs_discovered": 4, 00:11:08.944 "num_base_bdevs_operational": 4, 00:11:08.944 "base_bdevs_list": [ 00:11:08.944 { 00:11:08.944 "name": "NewBaseBdev", 00:11:08.944 "uuid": "96d5ebe4-f7fc-4d04-9f9b-b6526b5a4161", 00:11:08.944 "is_configured": true, 00:11:08.944 "data_offset": 0, 00:11:08.944 "data_size": 65536 00:11:08.944 }, 00:11:08.944 { 00:11:08.944 "name": "BaseBdev2", 00:11:08.944 "uuid": "674669c8-4347-42a7-b219-21dc95d930fa", 00:11:08.944 "is_configured": true, 00:11:08.944 "data_offset": 0, 00:11:08.944 "data_size": 65536 00:11:08.944 }, 00:11:08.944 { 00:11:08.944 "name": "BaseBdev3", 00:11:08.944 "uuid": "1410dac0-cc43-49e2-bb0f-50d0066630d5", 00:11:08.944 "is_configured": true, 00:11:08.944 "data_offset": 0, 00:11:08.944 "data_size": 65536 00:11:08.944 }, 00:11:08.944 { 00:11:08.944 "name": "BaseBdev4", 
00:11:08.944 "uuid": "1be37dbb-abfc-4aeb-abff-756fce50b163", 00:11:08.944 "is_configured": true, 00:11:08.944 "data_offset": 0, 00:11:08.944 "data_size": 65536 00:11:08.944 } 00:11:08.944 ] 00:11:08.944 } 00:11:08.944 } 00:11:08.944 }' 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:08.944 BaseBdev2 00:11:08.944 BaseBdev3 00:11:08.944 BaseBdev4' 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.944 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.203 [2024-11-15 10:56:15.919609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.203 [2024-11-15 10:56:15.919644] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.203 [2024-11-15 10:56:15.919733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.203 [2024-11-15 10:56:15.919809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.203 [2024-11-15 10:56:15.919821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71433 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 71433 
']' 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71433 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:11:09.203 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:09.204 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71433 00:11:09.204 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:09.204 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:09.204 killing process with pid 71433 00:11:09.204 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71433' 00:11:09.204 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71433 00:11:09.204 [2024-11-15 10:56:15.967164] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.204 10:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71433 00:11:09.769 [2024-11-15 10:56:16.392835] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:10.706 10:56:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:10.706 00:11:10.706 real 0m11.996s 00:11:10.706 user 0m19.025s 00:11:10.706 sys 0m2.149s 00:11:10.706 10:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.706 10:56:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.707 ************************************ 00:11:10.707 END TEST raid_state_function_test 00:11:10.707 ************************************ 00:11:10.707 10:56:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:10.707 
10:56:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:10.707 10:56:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:10.707 10:56:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:10.707 ************************************ 00:11:10.707 START TEST raid_state_function_test_sb 00:11:10.707 ************************************ 00:11:10.707 10:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:11:10.707 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:10.707 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:10.707 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:10.707 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72105 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72105' 00:11:10.967 Process raid pid: 72105 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72105 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 72105 ']' 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:10.967 10:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.967 [2024-11-15 10:56:17.729929] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:11:10.967 [2024-11-15 10:56:17.730049] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.967 [2024-11-15 10:56:17.888940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.226 [2024-11-15 10:56:18.010095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.485 [2024-11-15 10:56:18.231089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.485 [2024-11-15 10:56:18.231135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.744 [2024-11-15 10:56:18.583289] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.744 [2024-11-15 10:56:18.583362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.744 [2024-11-15 10:56:18.583375] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.744 [2024-11-15 10:56:18.583386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.744 [2024-11-15 10:56:18.583394] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:11.744 [2024-11-15 10:56:18.583404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.744 [2024-11-15 10:56:18.583411] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:11.744 [2024-11-15 10:56:18.583421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.744 
10:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.744 10:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.744 "name": "Existed_Raid", 00:11:11.744 "uuid": "d917b7e1-b916-4e13-a971-3174aa4649f4", 00:11:11.744 "strip_size_kb": 64, 00:11:11.744 "state": "configuring", 00:11:11.744 "raid_level": "concat", 00:11:11.744 "superblock": true, 00:11:11.744 "num_base_bdevs": 4, 00:11:11.744 "num_base_bdevs_discovered": 0, 00:11:11.744 "num_base_bdevs_operational": 4, 00:11:11.744 "base_bdevs_list": [ 00:11:11.744 { 00:11:11.744 "name": "BaseBdev1", 00:11:11.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.744 "is_configured": false, 00:11:11.744 "data_offset": 0, 00:11:11.744 "data_size": 0 00:11:11.744 }, 00:11:11.744 { 00:11:11.745 "name": "BaseBdev2", 00:11:11.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.745 "is_configured": false, 00:11:11.745 "data_offset": 0, 00:11:11.745 "data_size": 0 00:11:11.745 }, 00:11:11.745 { 00:11:11.745 "name": "BaseBdev3", 00:11:11.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.745 "is_configured": false, 00:11:11.745 "data_offset": 0, 00:11:11.745 "data_size": 0 00:11:11.745 }, 00:11:11.745 { 00:11:11.745 "name": "BaseBdev4", 00:11:11.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.745 "is_configured": false, 00:11:11.745 "data_offset": 0, 00:11:11.745 "data_size": 0 00:11:11.745 } 00:11:11.745 ] 00:11:11.745 }' 00:11:11.745 10:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.745 10:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.314 10:56:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.314 [2024-11-15 10:56:19.078382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:12.314 [2024-11-15 10:56:19.078422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.314 [2024-11-15 10:56:19.086371] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:12.314 [2024-11-15 10:56:19.086454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:12.314 [2024-11-15 10:56:19.086467] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:12.314 [2024-11-15 10:56:19.086494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:12.314 [2024-11-15 10:56:19.086500] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:12.314 [2024-11-15 10:56:19.086509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:12.314 [2024-11-15 10:56:19.086515] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:12.314 [2024-11-15 10:56:19.086524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.314 [2024-11-15 10:56:19.131720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.314 BaseBdev1 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.314 [ 00:11:12.314 { 00:11:12.314 "name": "BaseBdev1", 00:11:12.314 "aliases": [ 00:11:12.314 "97cd3bf8-6b2c-46ea-b92d-7d736dd9bab4" 00:11:12.314 ], 00:11:12.314 "product_name": "Malloc disk", 00:11:12.314 "block_size": 512, 00:11:12.314 "num_blocks": 65536, 00:11:12.314 "uuid": "97cd3bf8-6b2c-46ea-b92d-7d736dd9bab4", 00:11:12.314 "assigned_rate_limits": { 00:11:12.314 "rw_ios_per_sec": 0, 00:11:12.314 "rw_mbytes_per_sec": 0, 00:11:12.314 "r_mbytes_per_sec": 0, 00:11:12.314 "w_mbytes_per_sec": 0 00:11:12.314 }, 00:11:12.314 "claimed": true, 00:11:12.314 "claim_type": "exclusive_write", 00:11:12.314 "zoned": false, 00:11:12.314 "supported_io_types": { 00:11:12.314 "read": true, 00:11:12.314 "write": true, 00:11:12.314 "unmap": true, 00:11:12.314 "flush": true, 00:11:12.314 "reset": true, 00:11:12.314 "nvme_admin": false, 00:11:12.314 "nvme_io": false, 00:11:12.314 "nvme_io_md": false, 00:11:12.314 "write_zeroes": true, 00:11:12.314 "zcopy": true, 00:11:12.314 "get_zone_info": false, 00:11:12.314 "zone_management": false, 00:11:12.314 "zone_append": false, 00:11:12.314 "compare": false, 00:11:12.314 "compare_and_write": false, 00:11:12.314 "abort": true, 00:11:12.314 "seek_hole": false, 00:11:12.314 "seek_data": false, 00:11:12.314 "copy": true, 00:11:12.314 "nvme_iov_md": false 00:11:12.314 }, 00:11:12.314 "memory_domains": [ 00:11:12.314 { 00:11:12.314 "dma_device_id": "system", 00:11:12.314 "dma_device_type": 1 00:11:12.314 }, 00:11:12.314 { 00:11:12.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.314 "dma_device_type": 2 00:11:12.314 } 
00:11:12.314 ], 00:11:12.314 "driver_specific": {} 00:11:12.314 } 00:11:12.314 ] 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.314 10:56:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.314 "name": "Existed_Raid", 00:11:12.314 "uuid": "ef659baa-d1ff-4437-847d-8d8613601599", 00:11:12.314 "strip_size_kb": 64, 00:11:12.314 "state": "configuring", 00:11:12.314 "raid_level": "concat", 00:11:12.314 "superblock": true, 00:11:12.314 "num_base_bdevs": 4, 00:11:12.314 "num_base_bdevs_discovered": 1, 00:11:12.314 "num_base_bdevs_operational": 4, 00:11:12.314 "base_bdevs_list": [ 00:11:12.314 { 00:11:12.314 "name": "BaseBdev1", 00:11:12.314 "uuid": "97cd3bf8-6b2c-46ea-b92d-7d736dd9bab4", 00:11:12.314 "is_configured": true, 00:11:12.314 "data_offset": 2048, 00:11:12.314 "data_size": 63488 00:11:12.314 }, 00:11:12.314 { 00:11:12.314 "name": "BaseBdev2", 00:11:12.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.314 "is_configured": false, 00:11:12.314 "data_offset": 0, 00:11:12.314 "data_size": 0 00:11:12.314 }, 00:11:12.314 { 00:11:12.314 "name": "BaseBdev3", 00:11:12.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.314 "is_configured": false, 00:11:12.314 "data_offset": 0, 00:11:12.314 "data_size": 0 00:11:12.314 }, 00:11:12.314 { 00:11:12.314 "name": "BaseBdev4", 00:11:12.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.314 "is_configured": false, 00:11:12.314 "data_offset": 0, 00:11:12.314 "data_size": 0 00:11:12.314 } 00:11:12.314 ] 00:11:12.314 }' 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.314 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.884 10:56:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.884 [2024-11-15 10:56:19.602997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:12.884 [2024-11-15 10:56:19.603056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.884 [2024-11-15 10:56:19.615035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.884 [2024-11-15 10:56:19.617062] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:12.884 [2024-11-15 10:56:19.617146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:12.884 [2024-11-15 10:56:19.617178] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:12.884 [2024-11-15 10:56:19.617220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:12.884 [2024-11-15 10:56:19.617252] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:12.884 [2024-11-15 10:56:19.617279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.884 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:12.884 "name": "Existed_Raid", 00:11:12.884 "uuid": "3a9c64bf-5218-491c-b025-492144463886", 00:11:12.884 "strip_size_kb": 64, 00:11:12.884 "state": "configuring", 00:11:12.884 "raid_level": "concat", 00:11:12.884 "superblock": true, 00:11:12.884 "num_base_bdevs": 4, 00:11:12.884 "num_base_bdevs_discovered": 1, 00:11:12.884 "num_base_bdevs_operational": 4, 00:11:12.884 "base_bdevs_list": [ 00:11:12.884 { 00:11:12.884 "name": "BaseBdev1", 00:11:12.884 "uuid": "97cd3bf8-6b2c-46ea-b92d-7d736dd9bab4", 00:11:12.884 "is_configured": true, 00:11:12.884 "data_offset": 2048, 00:11:12.884 "data_size": 63488 00:11:12.884 }, 00:11:12.885 { 00:11:12.885 "name": "BaseBdev2", 00:11:12.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.885 "is_configured": false, 00:11:12.885 "data_offset": 0, 00:11:12.885 "data_size": 0 00:11:12.885 }, 00:11:12.885 { 00:11:12.885 "name": "BaseBdev3", 00:11:12.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.885 "is_configured": false, 00:11:12.885 "data_offset": 0, 00:11:12.885 "data_size": 0 00:11:12.885 }, 00:11:12.885 { 00:11:12.885 "name": "BaseBdev4", 00:11:12.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.885 "is_configured": false, 00:11:12.885 "data_offset": 0, 00:11:12.885 "data_size": 0 00:11:12.885 } 00:11:12.885 ] 00:11:12.885 }' 00:11:12.885 10:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.885 10:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.454 [2024-11-15 10:56:20.140125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:13.454 BaseBdev2 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.454 [ 00:11:13.454 { 00:11:13.454 "name": "BaseBdev2", 00:11:13.454 "aliases": [ 00:11:13.454 "f0a35ab3-c992-4b3e-89b4-b106cb744086" 00:11:13.454 ], 00:11:13.454 "product_name": "Malloc disk", 00:11:13.454 "block_size": 512, 00:11:13.454 "num_blocks": 65536, 00:11:13.454 "uuid": "f0a35ab3-c992-4b3e-89b4-b106cb744086", 
00:11:13.454 "assigned_rate_limits": { 00:11:13.454 "rw_ios_per_sec": 0, 00:11:13.454 "rw_mbytes_per_sec": 0, 00:11:13.454 "r_mbytes_per_sec": 0, 00:11:13.454 "w_mbytes_per_sec": 0 00:11:13.454 }, 00:11:13.454 "claimed": true, 00:11:13.454 "claim_type": "exclusive_write", 00:11:13.454 "zoned": false, 00:11:13.454 "supported_io_types": { 00:11:13.454 "read": true, 00:11:13.454 "write": true, 00:11:13.454 "unmap": true, 00:11:13.454 "flush": true, 00:11:13.454 "reset": true, 00:11:13.454 "nvme_admin": false, 00:11:13.454 "nvme_io": false, 00:11:13.454 "nvme_io_md": false, 00:11:13.454 "write_zeroes": true, 00:11:13.454 "zcopy": true, 00:11:13.454 "get_zone_info": false, 00:11:13.454 "zone_management": false, 00:11:13.454 "zone_append": false, 00:11:13.454 "compare": false, 00:11:13.454 "compare_and_write": false, 00:11:13.454 "abort": true, 00:11:13.454 "seek_hole": false, 00:11:13.454 "seek_data": false, 00:11:13.454 "copy": true, 00:11:13.454 "nvme_iov_md": false 00:11:13.454 }, 00:11:13.454 "memory_domains": [ 00:11:13.454 { 00:11:13.454 "dma_device_id": "system", 00:11:13.454 "dma_device_type": 1 00:11:13.454 }, 00:11:13.454 { 00:11:13.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.454 "dma_device_type": 2 00:11:13.454 } 00:11:13.454 ], 00:11:13.454 "driver_specific": {} 00:11:13.454 } 00:11:13.454 ] 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.454 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.454 "name": "Existed_Raid", 00:11:13.454 "uuid": "3a9c64bf-5218-491c-b025-492144463886", 00:11:13.454 "strip_size_kb": 64, 00:11:13.454 "state": "configuring", 00:11:13.454 "raid_level": "concat", 00:11:13.454 "superblock": true, 00:11:13.454 "num_base_bdevs": 4, 00:11:13.454 "num_base_bdevs_discovered": 2, 00:11:13.454 
"num_base_bdevs_operational": 4, 00:11:13.454 "base_bdevs_list": [ 00:11:13.454 { 00:11:13.454 "name": "BaseBdev1", 00:11:13.454 "uuid": "97cd3bf8-6b2c-46ea-b92d-7d736dd9bab4", 00:11:13.454 "is_configured": true, 00:11:13.454 "data_offset": 2048, 00:11:13.454 "data_size": 63488 00:11:13.454 }, 00:11:13.455 { 00:11:13.455 "name": "BaseBdev2", 00:11:13.455 "uuid": "f0a35ab3-c992-4b3e-89b4-b106cb744086", 00:11:13.455 "is_configured": true, 00:11:13.455 "data_offset": 2048, 00:11:13.455 "data_size": 63488 00:11:13.455 }, 00:11:13.455 { 00:11:13.455 "name": "BaseBdev3", 00:11:13.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.455 "is_configured": false, 00:11:13.455 "data_offset": 0, 00:11:13.455 "data_size": 0 00:11:13.455 }, 00:11:13.455 { 00:11:13.455 "name": "BaseBdev4", 00:11:13.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.455 "is_configured": false, 00:11:13.455 "data_offset": 0, 00:11:13.455 "data_size": 0 00:11:13.455 } 00:11:13.455 ] 00:11:13.455 }' 00:11:13.455 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.455 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.022 [2024-11-15 10:56:20.720495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.022 BaseBdev3 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.022 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.022 [ 00:11:14.022 { 00:11:14.022 "name": "BaseBdev3", 00:11:14.022 "aliases": [ 00:11:14.022 "755aeec4-1f75-4074-8ce4-eeaa0698a2c8" 00:11:14.022 ], 00:11:14.022 "product_name": "Malloc disk", 00:11:14.022 "block_size": 512, 00:11:14.022 "num_blocks": 65536, 00:11:14.022 "uuid": "755aeec4-1f75-4074-8ce4-eeaa0698a2c8", 00:11:14.022 "assigned_rate_limits": { 00:11:14.022 "rw_ios_per_sec": 0, 00:11:14.022 "rw_mbytes_per_sec": 0, 00:11:14.022 "r_mbytes_per_sec": 0, 00:11:14.022 "w_mbytes_per_sec": 0 00:11:14.022 }, 00:11:14.022 "claimed": true, 00:11:14.022 "claim_type": "exclusive_write", 00:11:14.022 "zoned": false, 00:11:14.022 "supported_io_types": { 
00:11:14.022 "read": true, 00:11:14.022 "write": true, 00:11:14.022 "unmap": true, 00:11:14.022 "flush": true, 00:11:14.022 "reset": true, 00:11:14.022 "nvme_admin": false, 00:11:14.022 "nvme_io": false, 00:11:14.022 "nvme_io_md": false, 00:11:14.022 "write_zeroes": true, 00:11:14.022 "zcopy": true, 00:11:14.022 "get_zone_info": false, 00:11:14.022 "zone_management": false, 00:11:14.022 "zone_append": false, 00:11:14.023 "compare": false, 00:11:14.023 "compare_and_write": false, 00:11:14.023 "abort": true, 00:11:14.023 "seek_hole": false, 00:11:14.023 "seek_data": false, 00:11:14.023 "copy": true, 00:11:14.023 "nvme_iov_md": false 00:11:14.023 }, 00:11:14.023 "memory_domains": [ 00:11:14.023 { 00:11:14.023 "dma_device_id": "system", 00:11:14.023 "dma_device_type": 1 00:11:14.023 }, 00:11:14.023 { 00:11:14.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.023 "dma_device_type": 2 00:11:14.023 } 00:11:14.023 ], 00:11:14.023 "driver_specific": {} 00:11:14.023 } 00:11:14.023 ] 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.023 "name": "Existed_Raid", 00:11:14.023 "uuid": "3a9c64bf-5218-491c-b025-492144463886", 00:11:14.023 "strip_size_kb": 64, 00:11:14.023 "state": "configuring", 00:11:14.023 "raid_level": "concat", 00:11:14.023 "superblock": true, 00:11:14.023 "num_base_bdevs": 4, 00:11:14.023 "num_base_bdevs_discovered": 3, 00:11:14.023 "num_base_bdevs_operational": 4, 00:11:14.023 "base_bdevs_list": [ 00:11:14.023 { 00:11:14.023 "name": "BaseBdev1", 00:11:14.023 "uuid": "97cd3bf8-6b2c-46ea-b92d-7d736dd9bab4", 00:11:14.023 "is_configured": true, 00:11:14.023 "data_offset": 2048, 00:11:14.023 "data_size": 63488 00:11:14.023 }, 00:11:14.023 { 00:11:14.023 "name": "BaseBdev2", 00:11:14.023 
"uuid": "f0a35ab3-c992-4b3e-89b4-b106cb744086", 00:11:14.023 "is_configured": true, 00:11:14.023 "data_offset": 2048, 00:11:14.023 "data_size": 63488 00:11:14.023 }, 00:11:14.023 { 00:11:14.023 "name": "BaseBdev3", 00:11:14.023 "uuid": "755aeec4-1f75-4074-8ce4-eeaa0698a2c8", 00:11:14.023 "is_configured": true, 00:11:14.023 "data_offset": 2048, 00:11:14.023 "data_size": 63488 00:11:14.023 }, 00:11:14.023 { 00:11:14.023 "name": "BaseBdev4", 00:11:14.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.023 "is_configured": false, 00:11:14.023 "data_offset": 0, 00:11:14.023 "data_size": 0 00:11:14.023 } 00:11:14.023 ] 00:11:14.023 }' 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.023 10:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.340 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:14.340 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.340 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 [2024-11-15 10:56:21.250773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:14.600 [2024-11-15 10:56:21.251036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:14.600 [2024-11-15 10:56:21.251052] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:14.600 [2024-11-15 10:56:21.251349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:14.600 [2024-11-15 10:56:21.251509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:14.600 [2024-11-15 10:56:21.251523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:11:14.600 BaseBdev4 00:11:14.600 [2024-11-15 10:56:21.251670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.600 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.600 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:14.600 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:14.600 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:14.600 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:14.600 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:14.600 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:14.600 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:14.600 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.600 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.600 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:14.600 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.600 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 [ 00:11:14.600 { 00:11:14.600 "name": "BaseBdev4", 00:11:14.600 "aliases": [ 00:11:14.601 "2d383ade-125e-4c81-9094-be2eae6fab1a" 00:11:14.601 ], 00:11:14.601 "product_name": "Malloc disk", 00:11:14.601 "block_size": 512, 
00:11:14.601 "num_blocks": 65536, 00:11:14.601 "uuid": "2d383ade-125e-4c81-9094-be2eae6fab1a", 00:11:14.601 "assigned_rate_limits": { 00:11:14.601 "rw_ios_per_sec": 0, 00:11:14.601 "rw_mbytes_per_sec": 0, 00:11:14.601 "r_mbytes_per_sec": 0, 00:11:14.601 "w_mbytes_per_sec": 0 00:11:14.601 }, 00:11:14.601 "claimed": true, 00:11:14.601 "claim_type": "exclusive_write", 00:11:14.601 "zoned": false, 00:11:14.601 "supported_io_types": { 00:11:14.601 "read": true, 00:11:14.601 "write": true, 00:11:14.601 "unmap": true, 00:11:14.601 "flush": true, 00:11:14.601 "reset": true, 00:11:14.601 "nvme_admin": false, 00:11:14.601 "nvme_io": false, 00:11:14.601 "nvme_io_md": false, 00:11:14.601 "write_zeroes": true, 00:11:14.601 "zcopy": true, 00:11:14.601 "get_zone_info": false, 00:11:14.601 "zone_management": false, 00:11:14.601 "zone_append": false, 00:11:14.601 "compare": false, 00:11:14.601 "compare_and_write": false, 00:11:14.601 "abort": true, 00:11:14.601 "seek_hole": false, 00:11:14.601 "seek_data": false, 00:11:14.601 "copy": true, 00:11:14.601 "nvme_iov_md": false 00:11:14.601 }, 00:11:14.601 "memory_domains": [ 00:11:14.601 { 00:11:14.601 "dma_device_id": "system", 00:11:14.601 "dma_device_type": 1 00:11:14.601 }, 00:11:14.601 { 00:11:14.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.601 "dma_device_type": 2 00:11:14.601 } 00:11:14.601 ], 00:11:14.601 "driver_specific": {} 00:11:14.601 } 00:11:14.601 ] 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.601 "name": "Existed_Raid", 00:11:14.601 "uuid": "3a9c64bf-5218-491c-b025-492144463886", 00:11:14.601 "strip_size_kb": 64, 00:11:14.601 "state": "online", 00:11:14.601 "raid_level": "concat", 00:11:14.601 "superblock": true, 00:11:14.601 "num_base_bdevs": 
4, 00:11:14.601 "num_base_bdevs_discovered": 4, 00:11:14.601 "num_base_bdevs_operational": 4, 00:11:14.601 "base_bdevs_list": [ 00:11:14.601 { 00:11:14.601 "name": "BaseBdev1", 00:11:14.601 "uuid": "97cd3bf8-6b2c-46ea-b92d-7d736dd9bab4", 00:11:14.601 "is_configured": true, 00:11:14.601 "data_offset": 2048, 00:11:14.601 "data_size": 63488 00:11:14.601 }, 00:11:14.601 { 00:11:14.601 "name": "BaseBdev2", 00:11:14.601 "uuid": "f0a35ab3-c992-4b3e-89b4-b106cb744086", 00:11:14.601 "is_configured": true, 00:11:14.601 "data_offset": 2048, 00:11:14.601 "data_size": 63488 00:11:14.601 }, 00:11:14.601 { 00:11:14.601 "name": "BaseBdev3", 00:11:14.601 "uuid": "755aeec4-1f75-4074-8ce4-eeaa0698a2c8", 00:11:14.601 "is_configured": true, 00:11:14.601 "data_offset": 2048, 00:11:14.601 "data_size": 63488 00:11:14.601 }, 00:11:14.601 { 00:11:14.601 "name": "BaseBdev4", 00:11:14.601 "uuid": "2d383ade-125e-4c81-9094-be2eae6fab1a", 00:11:14.601 "is_configured": true, 00:11:14.601 "data_offset": 2048, 00:11:14.601 "data_size": 63488 00:11:14.601 } 00:11:14.601 ] 00:11:14.601 }' 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.601 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.860 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:14.861 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:14.861 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:14.861 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:14.861 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:14.861 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:14.861 
10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:14.861 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:14.861 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.861 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.861 [2024-11-15 10:56:21.718508] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.861 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.861 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:14.861 "name": "Existed_Raid", 00:11:14.861 "aliases": [ 00:11:14.861 "3a9c64bf-5218-491c-b025-492144463886" 00:11:14.861 ], 00:11:14.861 "product_name": "Raid Volume", 00:11:14.861 "block_size": 512, 00:11:14.861 "num_blocks": 253952, 00:11:14.861 "uuid": "3a9c64bf-5218-491c-b025-492144463886", 00:11:14.861 "assigned_rate_limits": { 00:11:14.861 "rw_ios_per_sec": 0, 00:11:14.861 "rw_mbytes_per_sec": 0, 00:11:14.861 "r_mbytes_per_sec": 0, 00:11:14.861 "w_mbytes_per_sec": 0 00:11:14.861 }, 00:11:14.861 "claimed": false, 00:11:14.861 "zoned": false, 00:11:14.861 "supported_io_types": { 00:11:14.861 "read": true, 00:11:14.861 "write": true, 00:11:14.861 "unmap": true, 00:11:14.861 "flush": true, 00:11:14.861 "reset": true, 00:11:14.861 "nvme_admin": false, 00:11:14.861 "nvme_io": false, 00:11:14.861 "nvme_io_md": false, 00:11:14.861 "write_zeroes": true, 00:11:14.861 "zcopy": false, 00:11:14.861 "get_zone_info": false, 00:11:14.861 "zone_management": false, 00:11:14.861 "zone_append": false, 00:11:14.861 "compare": false, 00:11:14.861 "compare_and_write": false, 00:11:14.861 "abort": false, 00:11:14.861 "seek_hole": false, 00:11:14.861 "seek_data": false, 00:11:14.861 "copy": false, 00:11:14.861 
"nvme_iov_md": false 00:11:14.861 }, 00:11:14.861 "memory_domains": [ 00:11:14.861 { 00:11:14.861 "dma_device_id": "system", 00:11:14.861 "dma_device_type": 1 00:11:14.861 }, 00:11:14.861 { 00:11:14.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.861 "dma_device_type": 2 00:11:14.861 }, 00:11:14.861 { 00:11:14.861 "dma_device_id": "system", 00:11:14.861 "dma_device_type": 1 00:11:14.861 }, 00:11:14.861 { 00:11:14.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.861 "dma_device_type": 2 00:11:14.861 }, 00:11:14.861 { 00:11:14.861 "dma_device_id": "system", 00:11:14.861 "dma_device_type": 1 00:11:14.861 }, 00:11:14.861 { 00:11:14.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.861 "dma_device_type": 2 00:11:14.861 }, 00:11:14.861 { 00:11:14.861 "dma_device_id": "system", 00:11:14.861 "dma_device_type": 1 00:11:14.861 }, 00:11:14.861 { 00:11:14.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.861 "dma_device_type": 2 00:11:14.861 } 00:11:14.861 ], 00:11:14.861 "driver_specific": { 00:11:14.861 "raid": { 00:11:14.861 "uuid": "3a9c64bf-5218-491c-b025-492144463886", 00:11:14.861 "strip_size_kb": 64, 00:11:14.861 "state": "online", 00:11:14.861 "raid_level": "concat", 00:11:14.861 "superblock": true, 00:11:14.861 "num_base_bdevs": 4, 00:11:14.861 "num_base_bdevs_discovered": 4, 00:11:14.861 "num_base_bdevs_operational": 4, 00:11:14.861 "base_bdevs_list": [ 00:11:14.861 { 00:11:14.861 "name": "BaseBdev1", 00:11:14.861 "uuid": "97cd3bf8-6b2c-46ea-b92d-7d736dd9bab4", 00:11:14.861 "is_configured": true, 00:11:14.861 "data_offset": 2048, 00:11:14.861 "data_size": 63488 00:11:14.861 }, 00:11:14.861 { 00:11:14.861 "name": "BaseBdev2", 00:11:14.861 "uuid": "f0a35ab3-c992-4b3e-89b4-b106cb744086", 00:11:14.861 "is_configured": true, 00:11:14.861 "data_offset": 2048, 00:11:14.861 "data_size": 63488 00:11:14.861 }, 00:11:14.861 { 00:11:14.861 "name": "BaseBdev3", 00:11:14.861 "uuid": "755aeec4-1f75-4074-8ce4-eeaa0698a2c8", 00:11:14.861 "is_configured": true, 
00:11:14.861 "data_offset": 2048, 00:11:14.861 "data_size": 63488 00:11:14.861 }, 00:11:14.861 { 00:11:14.861 "name": "BaseBdev4", 00:11:14.861 "uuid": "2d383ade-125e-4c81-9094-be2eae6fab1a", 00:11:14.861 "is_configured": true, 00:11:14.861 "data_offset": 2048, 00:11:14.861 "data_size": 63488 00:11:14.861 } 00:11:14.861 ] 00:11:14.861 } 00:11:14.861 } 00:11:14.861 }' 00:11:14.861 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:15.120 BaseBdev2 00:11:15.120 BaseBdev3 00:11:15.120 BaseBdev4' 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.120 10:56:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.120 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:15.121 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.121 10:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:15.121 10:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.121 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.121 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.121 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.121 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.121 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:15.121 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.121 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.121 [2024-11-15 10:56:22.029626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:15.121 [2024-11-15 10:56:22.029658] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.121 [2024-11-15 10:56:22.029713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.380 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.380 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:15.380 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:15.380 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:15.380 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:15.380 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:15.380 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:15.380 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.380 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:15.380 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.380 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.380 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.380 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.380 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.380 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.380 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.381 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.381 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.381 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.381 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.381 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:15.381 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.381 "name": "Existed_Raid", 00:11:15.381 "uuid": "3a9c64bf-5218-491c-b025-492144463886", 00:11:15.381 "strip_size_kb": 64, 00:11:15.381 "state": "offline", 00:11:15.381 "raid_level": "concat", 00:11:15.381 "superblock": true, 00:11:15.381 "num_base_bdevs": 4, 00:11:15.381 "num_base_bdevs_discovered": 3, 00:11:15.381 "num_base_bdevs_operational": 3, 00:11:15.381 "base_bdevs_list": [ 00:11:15.381 { 00:11:15.381 "name": null, 00:11:15.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.381 "is_configured": false, 00:11:15.381 "data_offset": 0, 00:11:15.381 "data_size": 63488 00:11:15.381 }, 00:11:15.381 { 00:11:15.381 "name": "BaseBdev2", 00:11:15.381 "uuid": "f0a35ab3-c992-4b3e-89b4-b106cb744086", 00:11:15.381 "is_configured": true, 00:11:15.381 "data_offset": 2048, 00:11:15.381 "data_size": 63488 00:11:15.381 }, 00:11:15.381 { 00:11:15.381 "name": "BaseBdev3", 00:11:15.381 "uuid": "755aeec4-1f75-4074-8ce4-eeaa0698a2c8", 00:11:15.381 "is_configured": true, 00:11:15.381 "data_offset": 2048, 00:11:15.381 "data_size": 63488 00:11:15.381 }, 00:11:15.381 { 00:11:15.381 "name": "BaseBdev4", 00:11:15.381 "uuid": "2d383ade-125e-4c81-9094-be2eae6fab1a", 00:11:15.381 "is_configured": true, 00:11:15.381 "data_offset": 2048, 00:11:15.381 "data_size": 63488 00:11:15.381 } 00:11:15.381 ] 00:11:15.381 }' 00:11:15.381 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.381 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.950 
10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.950 [2024-11-15 10:56:22.679866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.950 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.950 [2024-11-15 10:56:22.835078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:16.210 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.210 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.210 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.210 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.210 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.210 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.210 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.210 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.210 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.210 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.210 10:56:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:16.210 10:56:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.210 10:56:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.210 [2024-11-15 10:56:22.995719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:16.210 [2024-11-15 10:56:22.995773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:16.210 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.210 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.210 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.210 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.210 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:16.210 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.210 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.210 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.468 BaseBdev2 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.468 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.468 [ 00:11:16.468 { 00:11:16.468 "name": "BaseBdev2", 00:11:16.468 "aliases": [ 00:11:16.469 
"8e649886-0b50-42be-8459-bae68452f140" 00:11:16.469 ], 00:11:16.469 "product_name": "Malloc disk", 00:11:16.469 "block_size": 512, 00:11:16.469 "num_blocks": 65536, 00:11:16.469 "uuid": "8e649886-0b50-42be-8459-bae68452f140", 00:11:16.469 "assigned_rate_limits": { 00:11:16.469 "rw_ios_per_sec": 0, 00:11:16.469 "rw_mbytes_per_sec": 0, 00:11:16.469 "r_mbytes_per_sec": 0, 00:11:16.469 "w_mbytes_per_sec": 0 00:11:16.469 }, 00:11:16.469 "claimed": false, 00:11:16.469 "zoned": false, 00:11:16.469 "supported_io_types": { 00:11:16.469 "read": true, 00:11:16.469 "write": true, 00:11:16.469 "unmap": true, 00:11:16.469 "flush": true, 00:11:16.469 "reset": true, 00:11:16.469 "nvme_admin": false, 00:11:16.469 "nvme_io": false, 00:11:16.469 "nvme_io_md": false, 00:11:16.469 "write_zeroes": true, 00:11:16.469 "zcopy": true, 00:11:16.469 "get_zone_info": false, 00:11:16.469 "zone_management": false, 00:11:16.469 "zone_append": false, 00:11:16.469 "compare": false, 00:11:16.469 "compare_and_write": false, 00:11:16.469 "abort": true, 00:11:16.469 "seek_hole": false, 00:11:16.469 "seek_data": false, 00:11:16.469 "copy": true, 00:11:16.469 "nvme_iov_md": false 00:11:16.469 }, 00:11:16.469 "memory_domains": [ 00:11:16.469 { 00:11:16.469 "dma_device_id": "system", 00:11:16.469 "dma_device_type": 1 00:11:16.469 }, 00:11:16.469 { 00:11:16.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.469 "dma_device_type": 2 00:11:16.469 } 00:11:16.469 ], 00:11:16.469 "driver_specific": {} 00:11:16.469 } 00:11:16.469 ] 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.469 10:56:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.469 BaseBdev3 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.469 [ 00:11:16.469 { 
00:11:16.469 "name": "BaseBdev3", 00:11:16.469 "aliases": [ 00:11:16.469 "e9144f91-1337-4697-b169-a42fa8d8b5d8" 00:11:16.469 ], 00:11:16.469 "product_name": "Malloc disk", 00:11:16.469 "block_size": 512, 00:11:16.469 "num_blocks": 65536, 00:11:16.469 "uuid": "e9144f91-1337-4697-b169-a42fa8d8b5d8", 00:11:16.469 "assigned_rate_limits": { 00:11:16.469 "rw_ios_per_sec": 0, 00:11:16.469 "rw_mbytes_per_sec": 0, 00:11:16.469 "r_mbytes_per_sec": 0, 00:11:16.469 "w_mbytes_per_sec": 0 00:11:16.469 }, 00:11:16.469 "claimed": false, 00:11:16.469 "zoned": false, 00:11:16.469 "supported_io_types": { 00:11:16.469 "read": true, 00:11:16.469 "write": true, 00:11:16.469 "unmap": true, 00:11:16.469 "flush": true, 00:11:16.469 "reset": true, 00:11:16.469 "nvme_admin": false, 00:11:16.469 "nvme_io": false, 00:11:16.469 "nvme_io_md": false, 00:11:16.469 "write_zeroes": true, 00:11:16.469 "zcopy": true, 00:11:16.469 "get_zone_info": false, 00:11:16.469 "zone_management": false, 00:11:16.469 "zone_append": false, 00:11:16.469 "compare": false, 00:11:16.469 "compare_and_write": false, 00:11:16.469 "abort": true, 00:11:16.469 "seek_hole": false, 00:11:16.469 "seek_data": false, 00:11:16.469 "copy": true, 00:11:16.469 "nvme_iov_md": false 00:11:16.469 }, 00:11:16.469 "memory_domains": [ 00:11:16.469 { 00:11:16.469 "dma_device_id": "system", 00:11:16.469 "dma_device_type": 1 00:11:16.469 }, 00:11:16.469 { 00:11:16.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.469 "dma_device_type": 2 00:11:16.469 } 00:11:16.469 ], 00:11:16.469 "driver_specific": {} 00:11:16.469 } 00:11:16.469 ] 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.469 BaseBdev4 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.469 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:16.728 [ 00:11:16.728 { 00:11:16.728 "name": "BaseBdev4", 00:11:16.728 "aliases": [ 00:11:16.728 "d6c589c7-67e4-4c39-8253-8443c040d2ca" 00:11:16.728 ], 00:11:16.728 "product_name": "Malloc disk", 00:11:16.728 "block_size": 512, 00:11:16.728 "num_blocks": 65536, 00:11:16.728 "uuid": "d6c589c7-67e4-4c39-8253-8443c040d2ca", 00:11:16.728 "assigned_rate_limits": { 00:11:16.728 "rw_ios_per_sec": 0, 00:11:16.728 "rw_mbytes_per_sec": 0, 00:11:16.728 "r_mbytes_per_sec": 0, 00:11:16.728 "w_mbytes_per_sec": 0 00:11:16.728 }, 00:11:16.728 "claimed": false, 00:11:16.728 "zoned": false, 00:11:16.728 "supported_io_types": { 00:11:16.728 "read": true, 00:11:16.728 "write": true, 00:11:16.728 "unmap": true, 00:11:16.728 "flush": true, 00:11:16.728 "reset": true, 00:11:16.728 "nvme_admin": false, 00:11:16.728 "nvme_io": false, 00:11:16.728 "nvme_io_md": false, 00:11:16.728 "write_zeroes": true, 00:11:16.728 "zcopy": true, 00:11:16.728 "get_zone_info": false, 00:11:16.728 "zone_management": false, 00:11:16.728 "zone_append": false, 00:11:16.728 "compare": false, 00:11:16.728 "compare_and_write": false, 00:11:16.728 "abort": true, 00:11:16.728 "seek_hole": false, 00:11:16.728 "seek_data": false, 00:11:16.728 "copy": true, 00:11:16.728 "nvme_iov_md": false 00:11:16.728 }, 00:11:16.728 "memory_domains": [ 00:11:16.728 { 00:11:16.728 "dma_device_id": "system", 00:11:16.728 "dma_device_type": 1 00:11:16.728 }, 00:11:16.728 { 00:11:16.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.728 "dma_device_type": 2 00:11:16.728 } 00:11:16.728 ], 00:11:16.728 "driver_specific": {} 00:11:16.728 } 00:11:16.728 ] 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:16.728 10:56:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.728 [2024-11-15 10:56:23.429965] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:16.728 [2024-11-15 10:56:23.430067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:16.728 [2024-11-15 10:56:23.430103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.728 [2024-11-15 10:56:23.432281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.728 [2024-11-15 10:56:23.432363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.728 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.729 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.729 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.729 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.729 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.729 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.729 "name": "Existed_Raid", 00:11:16.729 "uuid": "12d6dbcc-e228-418c-a914-133d9032f870", 00:11:16.729 "strip_size_kb": 64, 00:11:16.729 "state": "configuring", 00:11:16.729 "raid_level": "concat", 00:11:16.729 "superblock": true, 00:11:16.729 "num_base_bdevs": 4, 00:11:16.729 "num_base_bdevs_discovered": 3, 00:11:16.729 "num_base_bdevs_operational": 4, 00:11:16.729 "base_bdevs_list": [ 00:11:16.729 { 00:11:16.729 "name": "BaseBdev1", 00:11:16.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.729 "is_configured": false, 00:11:16.729 "data_offset": 0, 00:11:16.729 "data_size": 0 00:11:16.729 }, 00:11:16.729 { 00:11:16.729 "name": "BaseBdev2", 00:11:16.729 "uuid": "8e649886-0b50-42be-8459-bae68452f140", 00:11:16.729 "is_configured": true, 00:11:16.729 "data_offset": 2048, 00:11:16.729 "data_size": 63488 
00:11:16.729 }, 00:11:16.729 { 00:11:16.729 "name": "BaseBdev3", 00:11:16.729 "uuid": "e9144f91-1337-4697-b169-a42fa8d8b5d8", 00:11:16.729 "is_configured": true, 00:11:16.729 "data_offset": 2048, 00:11:16.729 "data_size": 63488 00:11:16.729 }, 00:11:16.729 { 00:11:16.729 "name": "BaseBdev4", 00:11:16.729 "uuid": "d6c589c7-67e4-4c39-8253-8443c040d2ca", 00:11:16.729 "is_configured": true, 00:11:16.729 "data_offset": 2048, 00:11:16.729 "data_size": 63488 00:11:16.729 } 00:11:16.729 ] 00:11:16.729 }' 00:11:16.729 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.729 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.987 [2024-11-15 10:56:23.877248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.987 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.988 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.988 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.246 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.246 "name": "Existed_Raid", 00:11:17.246 "uuid": "12d6dbcc-e228-418c-a914-133d9032f870", 00:11:17.246 "strip_size_kb": 64, 00:11:17.246 "state": "configuring", 00:11:17.246 "raid_level": "concat", 00:11:17.246 "superblock": true, 00:11:17.246 "num_base_bdevs": 4, 00:11:17.246 "num_base_bdevs_discovered": 2, 00:11:17.246 "num_base_bdevs_operational": 4, 00:11:17.246 "base_bdevs_list": [ 00:11:17.246 { 00:11:17.246 "name": "BaseBdev1", 00:11:17.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.246 "is_configured": false, 00:11:17.246 "data_offset": 0, 00:11:17.246 "data_size": 0 00:11:17.246 }, 00:11:17.246 { 00:11:17.246 "name": null, 00:11:17.246 "uuid": "8e649886-0b50-42be-8459-bae68452f140", 00:11:17.246 "is_configured": false, 00:11:17.246 "data_offset": 0, 00:11:17.246 "data_size": 63488 
00:11:17.246 }, 00:11:17.246 { 00:11:17.246 "name": "BaseBdev3", 00:11:17.246 "uuid": "e9144f91-1337-4697-b169-a42fa8d8b5d8", 00:11:17.246 "is_configured": true, 00:11:17.246 "data_offset": 2048, 00:11:17.246 "data_size": 63488 00:11:17.246 }, 00:11:17.246 { 00:11:17.246 "name": "BaseBdev4", 00:11:17.246 "uuid": "d6c589c7-67e4-4c39-8253-8443c040d2ca", 00:11:17.246 "is_configured": true, 00:11:17.246 "data_offset": 2048, 00:11:17.246 "data_size": 63488 00:11:17.246 } 00:11:17.246 ] 00:11:17.246 }' 00:11:17.246 10:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.246 10:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.506 [2024-11-15 10:56:24.403858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.506 BaseBdev1 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.506 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.506 [ 00:11:17.506 { 00:11:17.506 "name": "BaseBdev1", 00:11:17.506 "aliases": [ 00:11:17.506 "8649e65f-5d4b-4fb7-8212-6cba1b0746de" 00:11:17.506 ], 00:11:17.506 "product_name": "Malloc disk", 00:11:17.506 "block_size": 512, 00:11:17.506 "num_blocks": 65536, 00:11:17.506 "uuid": "8649e65f-5d4b-4fb7-8212-6cba1b0746de", 00:11:17.765 "assigned_rate_limits": { 00:11:17.765 "rw_ios_per_sec": 0, 00:11:17.765 "rw_mbytes_per_sec": 0, 
00:11:17.765 "r_mbytes_per_sec": 0, 00:11:17.765 "w_mbytes_per_sec": 0 00:11:17.765 }, 00:11:17.765 "claimed": true, 00:11:17.765 "claim_type": "exclusive_write", 00:11:17.765 "zoned": false, 00:11:17.765 "supported_io_types": { 00:11:17.765 "read": true, 00:11:17.765 "write": true, 00:11:17.765 "unmap": true, 00:11:17.765 "flush": true, 00:11:17.765 "reset": true, 00:11:17.765 "nvme_admin": false, 00:11:17.765 "nvme_io": false, 00:11:17.765 "nvme_io_md": false, 00:11:17.765 "write_zeroes": true, 00:11:17.765 "zcopy": true, 00:11:17.765 "get_zone_info": false, 00:11:17.765 "zone_management": false, 00:11:17.765 "zone_append": false, 00:11:17.765 "compare": false, 00:11:17.765 "compare_and_write": false, 00:11:17.765 "abort": true, 00:11:17.765 "seek_hole": false, 00:11:17.765 "seek_data": false, 00:11:17.765 "copy": true, 00:11:17.765 "nvme_iov_md": false 00:11:17.765 }, 00:11:17.765 "memory_domains": [ 00:11:17.765 { 00:11:17.765 "dma_device_id": "system", 00:11:17.765 "dma_device_type": 1 00:11:17.765 }, 00:11:17.765 { 00:11:17.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.765 "dma_device_type": 2 00:11:17.765 } 00:11:17.765 ], 00:11:17.765 "driver_specific": {} 00:11:17.765 } 00:11:17.765 ] 00:11:17.765 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.765 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:17.765 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:17.765 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.765 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.765 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.765 10:56:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.765 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.765 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.766 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.766 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.766 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.766 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.766 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.766 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.766 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.766 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.766 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.766 "name": "Existed_Raid", 00:11:17.766 "uuid": "12d6dbcc-e228-418c-a914-133d9032f870", 00:11:17.766 "strip_size_kb": 64, 00:11:17.766 "state": "configuring", 00:11:17.766 "raid_level": "concat", 00:11:17.766 "superblock": true, 00:11:17.766 "num_base_bdevs": 4, 00:11:17.766 "num_base_bdevs_discovered": 3, 00:11:17.766 "num_base_bdevs_operational": 4, 00:11:17.766 "base_bdevs_list": [ 00:11:17.766 { 00:11:17.766 "name": "BaseBdev1", 00:11:17.766 "uuid": "8649e65f-5d4b-4fb7-8212-6cba1b0746de", 00:11:17.766 "is_configured": true, 00:11:17.766 "data_offset": 2048, 00:11:17.766 "data_size": 63488 00:11:17.766 }, 00:11:17.766 { 
00:11:17.766 "name": null, 00:11:17.766 "uuid": "8e649886-0b50-42be-8459-bae68452f140", 00:11:17.766 "is_configured": false, 00:11:17.766 "data_offset": 0, 00:11:17.766 "data_size": 63488 00:11:17.766 }, 00:11:17.766 { 00:11:17.766 "name": "BaseBdev3", 00:11:17.766 "uuid": "e9144f91-1337-4697-b169-a42fa8d8b5d8", 00:11:17.766 "is_configured": true, 00:11:17.766 "data_offset": 2048, 00:11:17.766 "data_size": 63488 00:11:17.766 }, 00:11:17.766 { 00:11:17.766 "name": "BaseBdev4", 00:11:17.766 "uuid": "d6c589c7-67e4-4c39-8253-8443c040d2ca", 00:11:17.766 "is_configured": true, 00:11:17.766 "data_offset": 2048, 00:11:17.766 "data_size": 63488 00:11:17.766 } 00:11:17.766 ] 00:11:17.766 }' 00:11:17.766 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.766 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.024 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.024 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.024 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.024 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:18.025 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.025 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:18.025 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:18.025 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.025 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.025 [2024-11-15 10:56:24.943124] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:18.025 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.025 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:18.025 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.025 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.025 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.025 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.283 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.283 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.283 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.283 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.283 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.283 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.283 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.283 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.283 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.283 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.283 10:56:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.283 "name": "Existed_Raid", 00:11:18.283 "uuid": "12d6dbcc-e228-418c-a914-133d9032f870", 00:11:18.283 "strip_size_kb": 64, 00:11:18.283 "state": "configuring", 00:11:18.283 "raid_level": "concat", 00:11:18.283 "superblock": true, 00:11:18.283 "num_base_bdevs": 4, 00:11:18.283 "num_base_bdevs_discovered": 2, 00:11:18.283 "num_base_bdevs_operational": 4, 00:11:18.283 "base_bdevs_list": [ 00:11:18.283 { 00:11:18.283 "name": "BaseBdev1", 00:11:18.283 "uuid": "8649e65f-5d4b-4fb7-8212-6cba1b0746de", 00:11:18.283 "is_configured": true, 00:11:18.283 "data_offset": 2048, 00:11:18.283 "data_size": 63488 00:11:18.283 }, 00:11:18.283 { 00:11:18.283 "name": null, 00:11:18.283 "uuid": "8e649886-0b50-42be-8459-bae68452f140", 00:11:18.283 "is_configured": false, 00:11:18.283 "data_offset": 0, 00:11:18.283 "data_size": 63488 00:11:18.283 }, 00:11:18.283 { 00:11:18.283 "name": null, 00:11:18.283 "uuid": "e9144f91-1337-4697-b169-a42fa8d8b5d8", 00:11:18.283 "is_configured": false, 00:11:18.283 "data_offset": 0, 00:11:18.283 "data_size": 63488 00:11:18.283 }, 00:11:18.283 { 00:11:18.283 "name": "BaseBdev4", 00:11:18.283 "uuid": "d6c589c7-67e4-4c39-8253-8443c040d2ca", 00:11:18.283 "is_configured": true, 00:11:18.283 "data_offset": 2048, 00:11:18.283 "data_size": 63488 00:11:18.283 } 00:11:18.283 ] 00:11:18.283 }' 00:11:18.283 10:56:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.283 10:56:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.542 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.542 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:18.542 10:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.542 
10:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.802 [2024-11-15 10:56:25.502176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.802 "name": "Existed_Raid", 00:11:18.802 "uuid": "12d6dbcc-e228-418c-a914-133d9032f870", 00:11:18.802 "strip_size_kb": 64, 00:11:18.802 "state": "configuring", 00:11:18.802 "raid_level": "concat", 00:11:18.802 "superblock": true, 00:11:18.802 "num_base_bdevs": 4, 00:11:18.802 "num_base_bdevs_discovered": 3, 00:11:18.802 "num_base_bdevs_operational": 4, 00:11:18.802 "base_bdevs_list": [ 00:11:18.802 { 00:11:18.802 "name": "BaseBdev1", 00:11:18.802 "uuid": "8649e65f-5d4b-4fb7-8212-6cba1b0746de", 00:11:18.802 "is_configured": true, 00:11:18.802 "data_offset": 2048, 00:11:18.802 "data_size": 63488 00:11:18.802 }, 00:11:18.802 { 00:11:18.802 "name": null, 00:11:18.802 "uuid": "8e649886-0b50-42be-8459-bae68452f140", 00:11:18.802 "is_configured": false, 00:11:18.802 "data_offset": 0, 00:11:18.802 "data_size": 63488 00:11:18.802 }, 00:11:18.802 { 00:11:18.802 "name": "BaseBdev3", 00:11:18.802 "uuid": "e9144f91-1337-4697-b169-a42fa8d8b5d8", 00:11:18.802 "is_configured": true, 00:11:18.802 "data_offset": 2048, 00:11:18.802 "data_size": 63488 00:11:18.802 }, 00:11:18.802 { 00:11:18.802 "name": "BaseBdev4", 00:11:18.802 "uuid": 
"d6c589c7-67e4-4c39-8253-8443c040d2ca", 00:11:18.802 "is_configured": true, 00:11:18.802 "data_offset": 2048, 00:11:18.802 "data_size": 63488 00:11:18.802 } 00:11:18.802 ] 00:11:18.802 }' 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.802 10:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.371 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:19.371 10:56:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.371 10:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.371 10:56:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.371 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.371 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:19.371 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:19.371 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.371 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.371 [2024-11-15 10:56:26.049366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:19.371 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.371 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.372 "name": "Existed_Raid", 00:11:19.372 "uuid": "12d6dbcc-e228-418c-a914-133d9032f870", 00:11:19.372 "strip_size_kb": 64, 00:11:19.372 "state": "configuring", 00:11:19.372 "raid_level": "concat", 00:11:19.372 "superblock": true, 00:11:19.372 "num_base_bdevs": 4, 00:11:19.372 "num_base_bdevs_discovered": 2, 00:11:19.372 "num_base_bdevs_operational": 4, 00:11:19.372 "base_bdevs_list": [ 00:11:19.372 { 00:11:19.372 "name": null, 00:11:19.372 
"uuid": "8649e65f-5d4b-4fb7-8212-6cba1b0746de", 00:11:19.372 "is_configured": false, 00:11:19.372 "data_offset": 0, 00:11:19.372 "data_size": 63488 00:11:19.372 }, 00:11:19.372 { 00:11:19.372 "name": null, 00:11:19.372 "uuid": "8e649886-0b50-42be-8459-bae68452f140", 00:11:19.372 "is_configured": false, 00:11:19.372 "data_offset": 0, 00:11:19.372 "data_size": 63488 00:11:19.372 }, 00:11:19.372 { 00:11:19.372 "name": "BaseBdev3", 00:11:19.372 "uuid": "e9144f91-1337-4697-b169-a42fa8d8b5d8", 00:11:19.372 "is_configured": true, 00:11:19.372 "data_offset": 2048, 00:11:19.372 "data_size": 63488 00:11:19.372 }, 00:11:19.372 { 00:11:19.372 "name": "BaseBdev4", 00:11:19.372 "uuid": "d6c589c7-67e4-4c39-8253-8443c040d2ca", 00:11:19.372 "is_configured": true, 00:11:19.372 "data_offset": 2048, 00:11:19.372 "data_size": 63488 00:11:19.372 } 00:11:19.372 ] 00:11:19.372 }' 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.372 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.944 [2024-11-15 10:56:26.704459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.944 "name": "Existed_Raid", 00:11:19.944 "uuid": "12d6dbcc-e228-418c-a914-133d9032f870", 00:11:19.944 "strip_size_kb": 64, 00:11:19.944 "state": "configuring", 00:11:19.944 "raid_level": "concat", 00:11:19.944 "superblock": true, 00:11:19.944 "num_base_bdevs": 4, 00:11:19.944 "num_base_bdevs_discovered": 3, 00:11:19.944 "num_base_bdevs_operational": 4, 00:11:19.944 "base_bdevs_list": [ 00:11:19.944 { 00:11:19.944 "name": null, 00:11:19.944 "uuid": "8649e65f-5d4b-4fb7-8212-6cba1b0746de", 00:11:19.944 "is_configured": false, 00:11:19.944 "data_offset": 0, 00:11:19.944 "data_size": 63488 00:11:19.944 }, 00:11:19.944 { 00:11:19.944 "name": "BaseBdev2", 00:11:19.944 "uuid": "8e649886-0b50-42be-8459-bae68452f140", 00:11:19.944 "is_configured": true, 00:11:19.944 "data_offset": 2048, 00:11:19.944 "data_size": 63488 00:11:19.944 }, 00:11:19.944 { 00:11:19.944 "name": "BaseBdev3", 00:11:19.944 "uuid": "e9144f91-1337-4697-b169-a42fa8d8b5d8", 00:11:19.944 "is_configured": true, 00:11:19.944 "data_offset": 2048, 00:11:19.944 "data_size": 63488 00:11:19.944 }, 00:11:19.944 { 00:11:19.944 "name": "BaseBdev4", 00:11:19.944 "uuid": "d6c589c7-67e4-4c39-8253-8443c040d2ca", 00:11:19.944 "is_configured": true, 00:11:19.944 "data_offset": 2048, 00:11:19.944 "data_size": 63488 00:11:19.944 } 00:11:19.944 ] 00:11:19.944 }' 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.944 10:56:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.513 10:56:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8649e65f-5d4b-4fb7-8212-6cba1b0746de 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.513 [2024-11-15 10:56:27.346129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:20.513 [2024-11-15 10:56:27.346451] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:20.513 [2024-11-15 10:56:27.346466] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:20.513 [2024-11-15 10:56:27.346766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:20.513 [2024-11-15 10:56:27.346959] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:20.513 [2024-11-15 10:56:27.346979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:20.513 NewBaseBdev 00:11:20.513 [2024-11-15 10:56:27.347123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:20.513 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.513 10:56:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.513 [ 00:11:20.513 { 00:11:20.513 "name": "NewBaseBdev", 00:11:20.513 "aliases": [ 00:11:20.513 "8649e65f-5d4b-4fb7-8212-6cba1b0746de" 00:11:20.513 ], 00:11:20.513 "product_name": "Malloc disk", 00:11:20.513 "block_size": 512, 00:11:20.513 "num_blocks": 65536, 00:11:20.513 "uuid": "8649e65f-5d4b-4fb7-8212-6cba1b0746de", 00:11:20.513 "assigned_rate_limits": { 00:11:20.513 "rw_ios_per_sec": 0, 00:11:20.513 "rw_mbytes_per_sec": 0, 00:11:20.513 "r_mbytes_per_sec": 0, 00:11:20.514 "w_mbytes_per_sec": 0 00:11:20.514 }, 00:11:20.514 "claimed": true, 00:11:20.514 "claim_type": "exclusive_write", 00:11:20.514 "zoned": false, 00:11:20.514 "supported_io_types": { 00:11:20.514 "read": true, 00:11:20.514 "write": true, 00:11:20.514 "unmap": true, 00:11:20.514 "flush": true, 00:11:20.514 "reset": true, 00:11:20.514 "nvme_admin": false, 00:11:20.514 "nvme_io": false, 00:11:20.514 "nvme_io_md": false, 00:11:20.514 "write_zeroes": true, 00:11:20.514 "zcopy": true, 00:11:20.514 "get_zone_info": false, 00:11:20.514 "zone_management": false, 00:11:20.514 "zone_append": false, 00:11:20.514 "compare": false, 00:11:20.514 "compare_and_write": false, 00:11:20.514 "abort": true, 00:11:20.514 "seek_hole": false, 00:11:20.514 "seek_data": false, 00:11:20.514 "copy": true, 00:11:20.514 "nvme_iov_md": false 00:11:20.514 }, 00:11:20.514 "memory_domains": [ 00:11:20.514 { 00:11:20.514 "dma_device_id": "system", 00:11:20.514 "dma_device_type": 1 00:11:20.514 }, 00:11:20.514 { 00:11:20.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.514 "dma_device_type": 2 00:11:20.514 } 00:11:20.514 ], 00:11:20.514 "driver_specific": {} 00:11:20.514 } 00:11:20.514 ] 00:11:20.514 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.514 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:20.514 10:56:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:20.514 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.514 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.514 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.514 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.514 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.514 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.514 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.514 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.514 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.514 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.514 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.514 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.514 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.514 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.772 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.772 "name": "Existed_Raid", 00:11:20.772 "uuid": "12d6dbcc-e228-418c-a914-133d9032f870", 00:11:20.772 "strip_size_kb": 64, 00:11:20.772 
"state": "online", 00:11:20.772 "raid_level": "concat", 00:11:20.772 "superblock": true, 00:11:20.772 "num_base_bdevs": 4, 00:11:20.772 "num_base_bdevs_discovered": 4, 00:11:20.772 "num_base_bdevs_operational": 4, 00:11:20.772 "base_bdevs_list": [ 00:11:20.772 { 00:11:20.772 "name": "NewBaseBdev", 00:11:20.772 "uuid": "8649e65f-5d4b-4fb7-8212-6cba1b0746de", 00:11:20.772 "is_configured": true, 00:11:20.772 "data_offset": 2048, 00:11:20.772 "data_size": 63488 00:11:20.772 }, 00:11:20.772 { 00:11:20.772 "name": "BaseBdev2", 00:11:20.773 "uuid": "8e649886-0b50-42be-8459-bae68452f140", 00:11:20.773 "is_configured": true, 00:11:20.773 "data_offset": 2048, 00:11:20.773 "data_size": 63488 00:11:20.773 }, 00:11:20.773 { 00:11:20.773 "name": "BaseBdev3", 00:11:20.773 "uuid": "e9144f91-1337-4697-b169-a42fa8d8b5d8", 00:11:20.773 "is_configured": true, 00:11:20.773 "data_offset": 2048, 00:11:20.773 "data_size": 63488 00:11:20.773 }, 00:11:20.773 { 00:11:20.773 "name": "BaseBdev4", 00:11:20.773 "uuid": "d6c589c7-67e4-4c39-8253-8443c040d2ca", 00:11:20.773 "is_configured": true, 00:11:20.773 "data_offset": 2048, 00:11:20.773 "data_size": 63488 00:11:20.773 } 00:11:20.773 ] 00:11:20.773 }' 00:11:20.773 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.773 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.032 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:21.032 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:21.032 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:21.032 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:21.032 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:21.032 
10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:21.032 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:21.032 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.032 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.032 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:21.032 [2024-11-15 10:56:27.865826] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.032 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.032 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:21.032 "name": "Existed_Raid", 00:11:21.032 "aliases": [ 00:11:21.032 "12d6dbcc-e228-418c-a914-133d9032f870" 00:11:21.032 ], 00:11:21.032 "product_name": "Raid Volume", 00:11:21.032 "block_size": 512, 00:11:21.032 "num_blocks": 253952, 00:11:21.032 "uuid": "12d6dbcc-e228-418c-a914-133d9032f870", 00:11:21.032 "assigned_rate_limits": { 00:11:21.032 "rw_ios_per_sec": 0, 00:11:21.032 "rw_mbytes_per_sec": 0, 00:11:21.032 "r_mbytes_per_sec": 0, 00:11:21.032 "w_mbytes_per_sec": 0 00:11:21.032 }, 00:11:21.032 "claimed": false, 00:11:21.032 "zoned": false, 00:11:21.032 "supported_io_types": { 00:11:21.032 "read": true, 00:11:21.032 "write": true, 00:11:21.032 "unmap": true, 00:11:21.032 "flush": true, 00:11:21.032 "reset": true, 00:11:21.032 "nvme_admin": false, 00:11:21.032 "nvme_io": false, 00:11:21.032 "nvme_io_md": false, 00:11:21.032 "write_zeroes": true, 00:11:21.032 "zcopy": false, 00:11:21.032 "get_zone_info": false, 00:11:21.032 "zone_management": false, 00:11:21.032 "zone_append": false, 00:11:21.032 "compare": false, 00:11:21.032 "compare_and_write": false, 00:11:21.032 "abort": 
false, 00:11:21.032 "seek_hole": false, 00:11:21.032 "seek_data": false, 00:11:21.032 "copy": false, 00:11:21.032 "nvme_iov_md": false 00:11:21.032 }, 00:11:21.032 "memory_domains": [ 00:11:21.032 { 00:11:21.032 "dma_device_id": "system", 00:11:21.032 "dma_device_type": 1 00:11:21.032 }, 00:11:21.032 { 00:11:21.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.032 "dma_device_type": 2 00:11:21.032 }, 00:11:21.032 { 00:11:21.032 "dma_device_id": "system", 00:11:21.032 "dma_device_type": 1 00:11:21.032 }, 00:11:21.032 { 00:11:21.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.032 "dma_device_type": 2 00:11:21.032 }, 00:11:21.032 { 00:11:21.032 "dma_device_id": "system", 00:11:21.032 "dma_device_type": 1 00:11:21.032 }, 00:11:21.032 { 00:11:21.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.032 "dma_device_type": 2 00:11:21.032 }, 00:11:21.032 { 00:11:21.032 "dma_device_id": "system", 00:11:21.032 "dma_device_type": 1 00:11:21.032 }, 00:11:21.032 { 00:11:21.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.032 "dma_device_type": 2 00:11:21.032 } 00:11:21.032 ], 00:11:21.032 "driver_specific": { 00:11:21.032 "raid": { 00:11:21.032 "uuid": "12d6dbcc-e228-418c-a914-133d9032f870", 00:11:21.032 "strip_size_kb": 64, 00:11:21.032 "state": "online", 00:11:21.032 "raid_level": "concat", 00:11:21.032 "superblock": true, 00:11:21.032 "num_base_bdevs": 4, 00:11:21.032 "num_base_bdevs_discovered": 4, 00:11:21.032 "num_base_bdevs_operational": 4, 00:11:21.032 "base_bdevs_list": [ 00:11:21.032 { 00:11:21.032 "name": "NewBaseBdev", 00:11:21.032 "uuid": "8649e65f-5d4b-4fb7-8212-6cba1b0746de", 00:11:21.032 "is_configured": true, 00:11:21.032 "data_offset": 2048, 00:11:21.032 "data_size": 63488 00:11:21.032 }, 00:11:21.032 { 00:11:21.032 "name": "BaseBdev2", 00:11:21.032 "uuid": "8e649886-0b50-42be-8459-bae68452f140", 00:11:21.032 "is_configured": true, 00:11:21.032 "data_offset": 2048, 00:11:21.032 "data_size": 63488 00:11:21.032 }, 00:11:21.032 { 00:11:21.032 
"name": "BaseBdev3", 00:11:21.032 "uuid": "e9144f91-1337-4697-b169-a42fa8d8b5d8", 00:11:21.032 "is_configured": true, 00:11:21.032 "data_offset": 2048, 00:11:21.032 "data_size": 63488 00:11:21.032 }, 00:11:21.032 { 00:11:21.032 "name": "BaseBdev4", 00:11:21.032 "uuid": "d6c589c7-67e4-4c39-8253-8443c040d2ca", 00:11:21.032 "is_configured": true, 00:11:21.032 "data_offset": 2048, 00:11:21.032 "data_size": 63488 00:11:21.032 } 00:11:21.032 ] 00:11:21.032 } 00:11:21.032 } 00:11:21.032 }' 00:11:21.032 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:21.032 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:21.032 BaseBdev2 00:11:21.032 BaseBdev3 00:11:21.032 BaseBdev4' 00:11:21.032 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.291 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:21.291 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.291 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:21.291 10:56:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.291 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.292 10:56:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.292 10:56:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.292 [2024-11-15 10:56:28.176833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.292 [2024-11-15 10:56:28.176865] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.292 [2024-11-15 10:56:28.176951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.292 [2024-11-15 10:56:28.177031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.292 [2024-11-15 10:56:28.177042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72105 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 72105 ']' 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 72105 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72105 00:11:21.292 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:21.551 killing process with pid 72105 00:11:21.551 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:21.551 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72105' 00:11:21.551 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 72105 00:11:21.551 [2024-11-15 10:56:28.217295] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.551 10:56:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 72105 00:11:21.810 [2024-11-15 10:56:28.638153] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.245 10:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:23.245 00:11:23.245 real 0m12.153s 00:11:23.245 user 0m19.375s 00:11:23.245 sys 0m2.156s 00:11:23.245 ************************************ 00:11:23.245 END TEST raid_state_function_test_sb 00:11:23.245 
************************************ 00:11:23.245 10:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:23.245 10:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.245 10:56:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:23.245 10:56:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:23.245 10:56:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:23.245 10:56:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.245 ************************************ 00:11:23.245 START TEST raid_superblock_test 00:11:23.245 ************************************ 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:23.245 10:56:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72781 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72781 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72781 ']' 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:23.245 10:56:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.245 [2024-11-15 10:56:29.950246] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:11:23.245 [2024-11-15 10:56:29.950492] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72781 ] 00:11:23.245 [2024-11-15 10:56:30.111410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.504 [2024-11-15 10:56:30.245388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.764 [2024-11-15 10:56:30.489807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.764 [2024-11-15 10:56:30.489862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:24.023 
10:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.023 malloc1 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.023 [2024-11-15 10:56:30.934101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:24.023 [2024-11-15 10:56:30.934251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.023 [2024-11-15 10:56:30.934324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:24.023 [2024-11-15 10:56:30.934364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.023 [2024-11-15 10:56:30.936834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.023 [2024-11-15 10:56:30.936919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:24.023 pt1 00:11:24.023 10:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.024 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:24.024 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.024 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:24.024 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:24.024 10:56:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:24.024 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:24.024 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:24.024 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:24.024 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:24.024 10:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.024 10:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.283 malloc2 00:11:24.283 10:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.283 10:56:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:24.283 10:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.283 10:56:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.283 [2024-11-15 10:56:31.001459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:24.284 [2024-11-15 10:56:31.001532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.284 [2024-11-15 10:56:31.001560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:24.284 [2024-11-15 10:56:31.001571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.284 [2024-11-15 10:56:31.004003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.284 [2024-11-15 10:56:31.004046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:24.284 
pt2 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.284 malloc3 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.284 [2024-11-15 10:56:31.073138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:24.284 [2024-11-15 10:56:31.073262] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.284 [2024-11-15 10:56:31.073327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:24.284 [2024-11-15 10:56:31.073390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.284 [2024-11-15 10:56:31.075911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.284 [2024-11-15 10:56:31.075999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:24.284 pt3 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.284 malloc4 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.284 [2024-11-15 10:56:31.138607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:24.284 [2024-11-15 10:56:31.138732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.284 [2024-11-15 10:56:31.138774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:24.284 [2024-11-15 10:56:31.138808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.284 [2024-11-15 10:56:31.141358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.284 [2024-11-15 10:56:31.141440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:24.284 pt4 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.284 [2024-11-15 10:56:31.150643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:24.284 [2024-11-15 
10:56:31.152833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:24.284 [2024-11-15 10:56:31.152968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:24.284 [2024-11-15 10:56:31.153083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:24.284 [2024-11-15 10:56:31.153387] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:24.284 [2024-11-15 10:56:31.153443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:24.284 [2024-11-15 10:56:31.153797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:24.284 [2024-11-15 10:56:31.154048] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:24.284 [2024-11-15 10:56:31.154069] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:24.284 [2024-11-15 10:56:31.154272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.284 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.544 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.544 "name": "raid_bdev1", 00:11:24.544 "uuid": "74ab0011-b635-42d7-9799-005c36dff22c", 00:11:24.544 "strip_size_kb": 64, 00:11:24.544 "state": "online", 00:11:24.544 "raid_level": "concat", 00:11:24.544 "superblock": true, 00:11:24.544 "num_base_bdevs": 4, 00:11:24.544 "num_base_bdevs_discovered": 4, 00:11:24.544 "num_base_bdevs_operational": 4, 00:11:24.544 "base_bdevs_list": [ 00:11:24.544 { 00:11:24.544 "name": "pt1", 00:11:24.544 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:24.544 "is_configured": true, 00:11:24.544 "data_offset": 2048, 00:11:24.544 "data_size": 63488 00:11:24.544 }, 00:11:24.544 { 00:11:24.544 "name": "pt2", 00:11:24.544 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.544 "is_configured": true, 00:11:24.544 "data_offset": 2048, 00:11:24.544 "data_size": 63488 00:11:24.544 }, 00:11:24.544 { 00:11:24.544 "name": "pt3", 00:11:24.544 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:24.544 "is_configured": true, 00:11:24.544 "data_offset": 2048, 00:11:24.544 
"data_size": 63488 00:11:24.544 }, 00:11:24.544 { 00:11:24.544 "name": "pt4", 00:11:24.544 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:24.544 "is_configured": true, 00:11:24.544 "data_offset": 2048, 00:11:24.544 "data_size": 63488 00:11:24.544 } 00:11:24.544 ] 00:11:24.544 }' 00:11:24.544 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.544 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.804 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:24.804 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:24.804 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.804 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.805 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.805 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.805 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:24.805 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.805 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.805 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.805 [2024-11-15 10:56:31.658178] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.805 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.805 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.805 "name": "raid_bdev1", 00:11:24.805 "aliases": [ 00:11:24.805 "74ab0011-b635-42d7-9799-005c36dff22c" 
00:11:24.805 ], 00:11:24.805 "product_name": "Raid Volume", 00:11:24.805 "block_size": 512, 00:11:24.805 "num_blocks": 253952, 00:11:24.805 "uuid": "74ab0011-b635-42d7-9799-005c36dff22c", 00:11:24.805 "assigned_rate_limits": { 00:11:24.805 "rw_ios_per_sec": 0, 00:11:24.805 "rw_mbytes_per_sec": 0, 00:11:24.805 "r_mbytes_per_sec": 0, 00:11:24.805 "w_mbytes_per_sec": 0 00:11:24.805 }, 00:11:24.805 "claimed": false, 00:11:24.805 "zoned": false, 00:11:24.805 "supported_io_types": { 00:11:24.805 "read": true, 00:11:24.805 "write": true, 00:11:24.805 "unmap": true, 00:11:24.805 "flush": true, 00:11:24.805 "reset": true, 00:11:24.805 "nvme_admin": false, 00:11:24.805 "nvme_io": false, 00:11:24.805 "nvme_io_md": false, 00:11:24.805 "write_zeroes": true, 00:11:24.805 "zcopy": false, 00:11:24.805 "get_zone_info": false, 00:11:24.805 "zone_management": false, 00:11:24.805 "zone_append": false, 00:11:24.805 "compare": false, 00:11:24.805 "compare_and_write": false, 00:11:24.805 "abort": false, 00:11:24.805 "seek_hole": false, 00:11:24.805 "seek_data": false, 00:11:24.805 "copy": false, 00:11:24.805 "nvme_iov_md": false 00:11:24.805 }, 00:11:24.805 "memory_domains": [ 00:11:24.805 { 00:11:24.805 "dma_device_id": "system", 00:11:24.805 "dma_device_type": 1 00:11:24.805 }, 00:11:24.805 { 00:11:24.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.805 "dma_device_type": 2 00:11:24.805 }, 00:11:24.805 { 00:11:24.805 "dma_device_id": "system", 00:11:24.805 "dma_device_type": 1 00:11:24.805 }, 00:11:24.805 { 00:11:24.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.805 "dma_device_type": 2 00:11:24.805 }, 00:11:24.805 { 00:11:24.805 "dma_device_id": "system", 00:11:24.805 "dma_device_type": 1 00:11:24.805 }, 00:11:24.805 { 00:11:24.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.805 "dma_device_type": 2 00:11:24.805 }, 00:11:24.805 { 00:11:24.805 "dma_device_id": "system", 00:11:24.805 "dma_device_type": 1 00:11:24.805 }, 00:11:24.805 { 00:11:24.805 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:24.805 "dma_device_type": 2 00:11:24.805 } 00:11:24.805 ], 00:11:24.805 "driver_specific": { 00:11:24.805 "raid": { 00:11:24.805 "uuid": "74ab0011-b635-42d7-9799-005c36dff22c", 00:11:24.805 "strip_size_kb": 64, 00:11:24.805 "state": "online", 00:11:24.805 "raid_level": "concat", 00:11:24.805 "superblock": true, 00:11:24.805 "num_base_bdevs": 4, 00:11:24.805 "num_base_bdevs_discovered": 4, 00:11:24.805 "num_base_bdevs_operational": 4, 00:11:24.805 "base_bdevs_list": [ 00:11:24.805 { 00:11:24.805 "name": "pt1", 00:11:24.805 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:24.805 "is_configured": true, 00:11:24.805 "data_offset": 2048, 00:11:24.805 "data_size": 63488 00:11:24.805 }, 00:11:24.805 { 00:11:24.805 "name": "pt2", 00:11:24.805 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.805 "is_configured": true, 00:11:24.805 "data_offset": 2048, 00:11:24.805 "data_size": 63488 00:11:24.805 }, 00:11:24.805 { 00:11:24.805 "name": "pt3", 00:11:24.805 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:24.805 "is_configured": true, 00:11:24.805 "data_offset": 2048, 00:11:24.805 "data_size": 63488 00:11:24.805 }, 00:11:24.805 { 00:11:24.805 "name": "pt4", 00:11:24.805 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:24.805 "is_configured": true, 00:11:24.805 "data_offset": 2048, 00:11:24.805 "data_size": 63488 00:11:24.805 } 00:11:24.805 ] 00:11:24.805 } 00:11:24.805 } 00:11:24.805 }' 00:11:24.805 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:25.065 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:25.065 pt2 00:11:25.065 pt3 00:11:25.066 pt4' 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.066 10:56:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.066 10:56:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.326 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.326 10:56:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.326 [2024-11-15 10:56:32.013602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=74ab0011-b635-42d7-9799-005c36dff22c 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 74ab0011-b635-42d7-9799-005c36dff22c ']' 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.326 [2024-11-15 10:56:32.061137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.326 [2024-11-15 10:56:32.061243] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.326 [2024-11-15 10:56:32.061389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.326 [2024-11-15 10:56:32.061502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.326 [2024-11-15 10:56:32.061556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:25.326 10:56:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.326 [2024-11-15 10:56:32.220883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:25.326 [2024-11-15 10:56:32.223097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:25.326 [2024-11-15 10:56:32.223200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:25.326 [2024-11-15 10:56:32.223291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:25.326 [2024-11-15 10:56:32.223404] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:25.326 [2024-11-15 10:56:32.223514] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:25.326 [2024-11-15 10:56:32.223542] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:25.326 [2024-11-15 10:56:32.223564] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:25.326 [2024-11-15 10:56:32.223581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.326 [2024-11-15 10:56:32.223594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:25.326 request: 00:11:25.326 { 00:11:25.326 "name": "raid_bdev1", 00:11:25.326 "raid_level": "concat", 00:11:25.326 "base_bdevs": [ 00:11:25.326 "malloc1", 00:11:25.326 "malloc2", 00:11:25.326 "malloc3", 00:11:25.326 "malloc4" 00:11:25.326 ], 00:11:25.326 "strip_size_kb": 64, 00:11:25.326 "superblock": false, 00:11:25.326 "method": "bdev_raid_create", 00:11:25.326 "req_id": 1 00:11:25.326 } 00:11:25.326 Got JSON-RPC error response 00:11:25.326 response: 00:11:25.326 { 00:11:25.326 "code": -17, 00:11:25.326 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:25.326 } 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.326 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.585 [2024-11-15 10:56:32.284744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:25.585 [2024-11-15 10:56:32.284823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.585 [2024-11-15 10:56:32.284843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:25.585 [2024-11-15 10:56:32.284856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.585 [2024-11-15 10:56:32.287421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.585 [2024-11-15 10:56:32.287469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:25.585 [2024-11-15 10:56:32.287562] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:25.585 [2024-11-15 10:56:32.287644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:25.585 pt1 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.585 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.586 "name": "raid_bdev1", 00:11:25.586 "uuid": "74ab0011-b635-42d7-9799-005c36dff22c", 00:11:25.586 "strip_size_kb": 64, 00:11:25.586 "state": "configuring", 00:11:25.586 "raid_level": "concat", 00:11:25.586 "superblock": true, 00:11:25.586 "num_base_bdevs": 4, 00:11:25.586 "num_base_bdevs_discovered": 1, 00:11:25.586 "num_base_bdevs_operational": 4, 00:11:25.586 "base_bdevs_list": [ 00:11:25.586 { 00:11:25.586 "name": "pt1", 00:11:25.586 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:25.586 "is_configured": true, 00:11:25.586 "data_offset": 2048, 00:11:25.586 "data_size": 63488 00:11:25.586 }, 00:11:25.586 { 00:11:25.586 "name": null, 00:11:25.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.586 "is_configured": false, 00:11:25.586 "data_offset": 2048, 00:11:25.586 "data_size": 63488 00:11:25.586 }, 00:11:25.586 { 00:11:25.586 "name": null, 00:11:25.586 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:25.586 "is_configured": false, 00:11:25.586 "data_offset": 2048, 00:11:25.586 "data_size": 63488 00:11:25.586 }, 00:11:25.586 { 00:11:25.586 "name": null, 00:11:25.586 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:25.586 "is_configured": false, 00:11:25.586 "data_offset": 2048, 00:11:25.586 "data_size": 63488 00:11:25.586 } 00:11:25.586 ] 00:11:25.586 }' 00:11:25.586 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.586 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.845 [2024-11-15 10:56:32.708164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:25.845 [2024-11-15 10:56:32.708337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.845 [2024-11-15 10:56:32.708367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:25.845 [2024-11-15 10:56:32.708380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.845 [2024-11-15 10:56:32.708868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.845 [2024-11-15 10:56:32.708892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:25.845 [2024-11-15 10:56:32.708986] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:25.845 [2024-11-15 10:56:32.709015] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:25.845 pt2 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.845 [2024-11-15 10:56:32.716159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.845 10:56:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.845 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.845 "name": "raid_bdev1", 00:11:25.845 "uuid": "74ab0011-b635-42d7-9799-005c36dff22c", 00:11:25.845 "strip_size_kb": 64, 00:11:25.846 "state": "configuring", 00:11:25.846 "raid_level": "concat", 00:11:25.846 "superblock": true, 00:11:25.846 "num_base_bdevs": 4, 00:11:25.846 "num_base_bdevs_discovered": 1, 00:11:25.846 "num_base_bdevs_operational": 4, 00:11:25.846 "base_bdevs_list": [ 00:11:25.846 { 00:11:25.846 "name": "pt1", 00:11:25.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:25.846 "is_configured": true, 00:11:25.846 "data_offset": 2048, 00:11:25.846 "data_size": 63488 00:11:25.846 }, 00:11:25.846 { 00:11:25.846 "name": null, 00:11:25.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.846 "is_configured": false, 00:11:25.846 "data_offset": 0, 00:11:25.846 "data_size": 63488 00:11:25.846 }, 00:11:25.846 { 00:11:25.846 "name": null, 00:11:25.846 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:25.846 "is_configured": false, 00:11:25.846 "data_offset": 2048, 00:11:25.846 "data_size": 63488 00:11:25.846 }, 00:11:25.846 { 00:11:25.846 "name": null, 00:11:25.846 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:25.846 "is_configured": false, 00:11:25.846 "data_offset": 2048, 00:11:25.846 "data_size": 63488 00:11:25.846 } 00:11:25.846 ] 00:11:25.846 }' 00:11:25.846 10:56:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.846 10:56:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.416 [2024-11-15 10:56:33.187478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:26.416 [2024-11-15 10:56:33.187614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.416 [2024-11-15 10:56:33.187668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:26.416 [2024-11-15 10:56:33.187708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.416 [2024-11-15 10:56:33.188253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.416 [2024-11-15 10:56:33.188338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:26.416 [2024-11-15 10:56:33.188486] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:26.416 [2024-11-15 10:56:33.188546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:26.416 pt2 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.416 [2024-11-15 10:56:33.199450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:26.416 [2024-11-15 10:56:33.199549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.416 [2024-11-15 10:56:33.199596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:26.416 [2024-11-15 10:56:33.199660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.416 [2024-11-15 10:56:33.200134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.416 [2024-11-15 10:56:33.200205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:26.416 [2024-11-15 10:56:33.200340] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:26.416 [2024-11-15 10:56:33.200399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:26.416 pt3 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.416 [2024-11-15 10:56:33.211415] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:11:26.416 [2024-11-15 10:56:33.211472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.416 [2024-11-15 10:56:33.211494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:26.416 [2024-11-15 10:56:33.211504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.416 [2024-11-15 10:56:33.211936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.416 [2024-11-15 10:56:33.211957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:26.416 [2024-11-15 10:56:33.212027] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:26.416 [2024-11-15 10:56:33.212048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:26.416 [2024-11-15 10:56:33.212212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:26.416 [2024-11-15 10:56:33.212222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:26.416 [2024-11-15 10:56:33.212510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:26.416 [2024-11-15 10:56:33.212700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:26.416 [2024-11-15 10:56:33.212717] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:26.416 [2024-11-15 10:56:33.212886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.416 pt4 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.416 
10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.416 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.416 "name": "raid_bdev1", 00:11:26.416 "uuid": "74ab0011-b635-42d7-9799-005c36dff22c", 00:11:26.416 "strip_size_kb": 64, 00:11:26.416 "state": "online", 00:11:26.416 "raid_level": "concat", 00:11:26.416 "superblock": true, 00:11:26.416 
"num_base_bdevs": 4, 00:11:26.416 "num_base_bdevs_discovered": 4, 00:11:26.416 "num_base_bdevs_operational": 4, 00:11:26.416 "base_bdevs_list": [ 00:11:26.416 { 00:11:26.416 "name": "pt1", 00:11:26.416 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.416 "is_configured": true, 00:11:26.416 "data_offset": 2048, 00:11:26.416 "data_size": 63488 00:11:26.416 }, 00:11:26.416 { 00:11:26.416 "name": "pt2", 00:11:26.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.416 "is_configured": true, 00:11:26.416 "data_offset": 2048, 00:11:26.416 "data_size": 63488 00:11:26.416 }, 00:11:26.416 { 00:11:26.416 "name": "pt3", 00:11:26.416 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:26.416 "is_configured": true, 00:11:26.416 "data_offset": 2048, 00:11:26.416 "data_size": 63488 00:11:26.416 }, 00:11:26.416 { 00:11:26.416 "name": "pt4", 00:11:26.416 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:26.416 "is_configured": true, 00:11:26.417 "data_offset": 2048, 00:11:26.417 "data_size": 63488 00:11:26.417 } 00:11:26.417 ] 00:11:26.417 }' 00:11:26.417 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.417 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
jq '.[]' 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.007 [2024-11-15 10:56:33.722971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:27.007 "name": "raid_bdev1", 00:11:27.007 "aliases": [ 00:11:27.007 "74ab0011-b635-42d7-9799-005c36dff22c" 00:11:27.007 ], 00:11:27.007 "product_name": "Raid Volume", 00:11:27.007 "block_size": 512, 00:11:27.007 "num_blocks": 253952, 00:11:27.007 "uuid": "74ab0011-b635-42d7-9799-005c36dff22c", 00:11:27.007 "assigned_rate_limits": { 00:11:27.007 "rw_ios_per_sec": 0, 00:11:27.007 "rw_mbytes_per_sec": 0, 00:11:27.007 "r_mbytes_per_sec": 0, 00:11:27.007 "w_mbytes_per_sec": 0 00:11:27.007 }, 00:11:27.007 "claimed": false, 00:11:27.007 "zoned": false, 00:11:27.007 "supported_io_types": { 00:11:27.007 "read": true, 00:11:27.007 "write": true, 00:11:27.007 "unmap": true, 00:11:27.007 "flush": true, 00:11:27.007 "reset": true, 00:11:27.007 "nvme_admin": false, 00:11:27.007 "nvme_io": false, 00:11:27.007 "nvme_io_md": false, 00:11:27.007 "write_zeroes": true, 00:11:27.007 "zcopy": false, 00:11:27.007 "get_zone_info": false, 00:11:27.007 "zone_management": false, 00:11:27.007 "zone_append": false, 00:11:27.007 "compare": false, 00:11:27.007 "compare_and_write": false, 00:11:27.007 "abort": false, 00:11:27.007 "seek_hole": false, 00:11:27.007 "seek_data": false, 00:11:27.007 "copy": false, 00:11:27.007 "nvme_iov_md": false 00:11:27.007 }, 00:11:27.007 "memory_domains": [ 00:11:27.007 { 00:11:27.007 "dma_device_id": "system", 
00:11:27.007 "dma_device_type": 1 00:11:27.007 }, 00:11:27.007 { 00:11:27.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.007 "dma_device_type": 2 00:11:27.007 }, 00:11:27.007 { 00:11:27.007 "dma_device_id": "system", 00:11:27.007 "dma_device_type": 1 00:11:27.007 }, 00:11:27.007 { 00:11:27.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.007 "dma_device_type": 2 00:11:27.007 }, 00:11:27.007 { 00:11:27.007 "dma_device_id": "system", 00:11:27.007 "dma_device_type": 1 00:11:27.007 }, 00:11:27.007 { 00:11:27.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.007 "dma_device_type": 2 00:11:27.007 }, 00:11:27.007 { 00:11:27.007 "dma_device_id": "system", 00:11:27.007 "dma_device_type": 1 00:11:27.007 }, 00:11:27.007 { 00:11:27.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.007 "dma_device_type": 2 00:11:27.007 } 00:11:27.007 ], 00:11:27.007 "driver_specific": { 00:11:27.007 "raid": { 00:11:27.007 "uuid": "74ab0011-b635-42d7-9799-005c36dff22c", 00:11:27.007 "strip_size_kb": 64, 00:11:27.007 "state": "online", 00:11:27.007 "raid_level": "concat", 00:11:27.007 "superblock": true, 00:11:27.007 "num_base_bdevs": 4, 00:11:27.007 "num_base_bdevs_discovered": 4, 00:11:27.007 "num_base_bdevs_operational": 4, 00:11:27.007 "base_bdevs_list": [ 00:11:27.007 { 00:11:27.007 "name": "pt1", 00:11:27.007 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.007 "is_configured": true, 00:11:27.007 "data_offset": 2048, 00:11:27.007 "data_size": 63488 00:11:27.007 }, 00:11:27.007 { 00:11:27.007 "name": "pt2", 00:11:27.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.007 "is_configured": true, 00:11:27.007 "data_offset": 2048, 00:11:27.007 "data_size": 63488 00:11:27.007 }, 00:11:27.007 { 00:11:27.007 "name": "pt3", 00:11:27.007 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.007 "is_configured": true, 00:11:27.007 "data_offset": 2048, 00:11:27.007 "data_size": 63488 00:11:27.007 }, 00:11:27.007 { 00:11:27.007 "name": "pt4", 00:11:27.007 
"uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.007 "is_configured": true, 00:11:27.007 "data_offset": 2048, 00:11:27.007 "data_size": 63488 00:11:27.007 } 00:11:27.007 ] 00:11:27.007 } 00:11:27.007 } 00:11:27.007 }' 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:27.007 pt2 00:11:27.007 pt3 00:11:27.007 pt4' 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.007 
10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.007 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.277 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:27.277 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.277 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.277 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.277 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.277 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.277 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.277 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.277 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.277 10:56:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:27.277 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:27.277 10:56:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.277 [2024-11-15 10:56:34.022583] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 74ab0011-b635-42d7-9799-005c36dff22c '!=' 74ab0011-b635-42d7-9799-005c36dff22c ']' 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72781 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72781 ']' 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72781 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:11:27.277 10:56:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72781 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:27.277 killing process with pid 72781 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72781' 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72781 00:11:27.277 [2024-11-15 10:56:34.089670] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:27.277 10:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72781 00:11:27.277 [2024-11-15 10:56:34.089767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.277 [2024-11-15 10:56:34.089857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.277 [2024-11-15 10:56:34.089870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:27.846 [2024-11-15 10:56:34.572358] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.220 ************************************ 00:11:29.220 END TEST raid_superblock_test 00:11:29.220 ************************************ 00:11:29.221 10:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:29.221 00:11:29.221 real 0m5.917s 00:11:29.221 user 0m8.495s 00:11:29.221 sys 0m0.981s 00:11:29.221 10:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.221 10:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.221 
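The superblock test above built its `$base_bdev_names` list ("pt1 pt2 pt3 pt4") by running a jq `select()` filter over the raid bdev dump. As a standalone sketch of that extraction step (the JSON is trimmed down from the `raid_bdev_info` dump earlier in this log; the `/tmp` path is just an illustrative choice):

```shell
# Minimal reproduction of the bdev_raid.sh@188 extraction seen above.
# The input mirrors the driver_specific.raid section of the dump,
# reduced to the fields the filter actually touches; pt3 is marked
# unconfigured here to show that select() drops it.
cat > /tmp/raid_info.json <<'EOF'
{
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        { "name": "pt1", "is_configured": true },
        { "name": "pt2", "is_configured": true },
        { "name": "pt3", "is_configured": false }
      ]
    }
  }
}
EOF

# Same filter string as in the trace: keep only configured bdev names.
jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' /tmp/raid_info.json
# prints:
# pt1
# pt2
```

In the actual run all four `pt*` bdevs had `is_configured: true`, so the loop over `$base_bdev_names` visited every one of them.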
10:56:35 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:29.221 10:56:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:29.221 10:56:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:29.221 10:56:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.221 ************************************ 00:11:29.221 START TEST raid_read_error_test 00:11:29.221 ************************************ 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lrHv04qtam 00:11:29.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73051 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73051 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 73051 ']' 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.221 10:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:29.221 [2024-11-15 10:56:35.938319] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:11:29.221 [2024-11-15 10:56:35.938437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73051 ] 00:11:29.221 [2024-11-15 10:56:36.112762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.479 [2024-11-15 10:56:36.236627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.737 [2024-11-15 10:56:36.442105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.737 [2024-11-15 10:56:36.442143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.997 BaseBdev1_malloc 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.997 true 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.997 [2024-11-15 10:56:36.832869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:29.997 [2024-11-15 10:56:36.833001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.997 [2024-11-15 10:56:36.833030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:29.997 [2024-11-15 10:56:36.833042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.997 [2024-11-15 10:56:36.835471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.997 [2024-11-15 10:56:36.835511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:29.997 BaseBdev1 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.997 BaseBdev2_malloc 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.997 true 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.997 [2024-11-15 10:56:36.890184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:29.997 [2024-11-15 10:56:36.890290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.997 [2024-11-15 10:56:36.890341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:29.997 [2024-11-15 10:56:36.890352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.997 [2024-11-15 10:56:36.892550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.997 [2024-11-15 10:56:36.892590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:29.997 BaseBdev2 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.997 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.255 BaseBdev3_malloc 00:11:30.255 10:56:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.255 10:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:30.255 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.255 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.255 true 00:11:30.255 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.255 10:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:30.255 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.255 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.255 [2024-11-15 10:56:36.964570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:30.255 [2024-11-15 10:56:36.964691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.255 [2024-11-15 10:56:36.964715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:30.255 [2024-11-15 10:56:36.964726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.255 [2024-11-15 10:56:36.966926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.255 [2024-11-15 10:56:36.966968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:30.255 BaseBdev3 00:11:30.255 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.255 10:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.255 10:56:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:30.255 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.255 10:56:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.255 BaseBdev4_malloc 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.255 true 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.255 [2024-11-15 10:56:37.022093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:30.255 [2024-11-15 10:56:37.022153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.255 [2024-11-15 10:56:37.022192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:30.255 [2024-11-15 10:56:37.022202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.255 [2024-11-15 10:56:37.024541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.255 [2024-11-15 10:56:37.024585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:30.255 BaseBdev4 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.255 [2024-11-15 10:56:37.034196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.255 [2024-11-15 10:56:37.036239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.255 [2024-11-15 10:56:37.036351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.255 [2024-11-15 10:56:37.036464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:30.255 [2024-11-15 10:56:37.036726] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:30.255 [2024-11-15 10:56:37.036743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:30.255 [2024-11-15 10:56:37.037028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:30.255 [2024-11-15 10:56:37.037207] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:30.255 [2024-11-15 10:56:37.037219] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:30.255 [2024-11-15 10:56:37.037419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:30.255 10:56:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.255 10:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.256 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.256 10:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.256 10:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.256 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.256 "name": "raid_bdev1", 00:11:30.256 "uuid": "7515a736-e66b-4169-bef1-b94c733b05de", 00:11:30.256 "strip_size_kb": 64, 00:11:30.256 "state": "online", 00:11:30.256 "raid_level": "concat", 00:11:30.256 "superblock": true, 00:11:30.256 "num_base_bdevs": 4, 00:11:30.256 "num_base_bdevs_discovered": 4, 00:11:30.256 "num_base_bdevs_operational": 4, 00:11:30.256 "base_bdevs_list": [ 
00:11:30.256 { 00:11:30.256 "name": "BaseBdev1", 00:11:30.256 "uuid": "71d535c4-b863-5be8-b29a-ee4c8a9ae3bf", 00:11:30.256 "is_configured": true, 00:11:30.256 "data_offset": 2048, 00:11:30.256 "data_size": 63488 00:11:30.256 }, 00:11:30.256 { 00:11:30.256 "name": "BaseBdev2", 00:11:30.256 "uuid": "9bd7b5b7-298e-533c-b53b-46aeff57202f", 00:11:30.256 "is_configured": true, 00:11:30.256 "data_offset": 2048, 00:11:30.256 "data_size": 63488 00:11:30.256 }, 00:11:30.256 { 00:11:30.256 "name": "BaseBdev3", 00:11:30.256 "uuid": "54610778-4864-5ded-bc16-c21de0e301da", 00:11:30.256 "is_configured": true, 00:11:30.256 "data_offset": 2048, 00:11:30.256 "data_size": 63488 00:11:30.256 }, 00:11:30.256 { 00:11:30.256 "name": "BaseBdev4", 00:11:30.256 "uuid": "41d7f610-eb5f-54e0-a585-7f300ce90a93", 00:11:30.256 "is_configured": true, 00:11:30.256 "data_offset": 2048, 00:11:30.256 "data_size": 63488 00:11:30.256 } 00:11:30.256 ] 00:11:30.256 }' 00:11:30.256 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.256 10:56:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.819 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:30.819 10:56:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:30.819 [2024-11-15 10:56:37.594653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.754 10:56:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.754 10:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.754 10:56:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.754 "name": "raid_bdev1", 00:11:31.754 "uuid": "7515a736-e66b-4169-bef1-b94c733b05de", 00:11:31.754 "strip_size_kb": 64, 00:11:31.754 "state": "online", 00:11:31.754 "raid_level": "concat", 00:11:31.754 "superblock": true, 00:11:31.754 "num_base_bdevs": 4, 00:11:31.754 "num_base_bdevs_discovered": 4, 00:11:31.754 "num_base_bdevs_operational": 4, 00:11:31.754 "base_bdevs_list": [ 00:11:31.754 { 00:11:31.754 "name": "BaseBdev1", 00:11:31.754 "uuid": "71d535c4-b863-5be8-b29a-ee4c8a9ae3bf", 00:11:31.754 "is_configured": true, 00:11:31.754 "data_offset": 2048, 00:11:31.754 "data_size": 63488 00:11:31.754 }, 00:11:31.754 { 00:11:31.754 "name": "BaseBdev2", 00:11:31.754 "uuid": "9bd7b5b7-298e-533c-b53b-46aeff57202f", 00:11:31.754 "is_configured": true, 00:11:31.755 "data_offset": 2048, 00:11:31.755 "data_size": 63488 00:11:31.755 }, 00:11:31.755 { 00:11:31.755 "name": "BaseBdev3", 00:11:31.755 "uuid": "54610778-4864-5ded-bc16-c21de0e301da", 00:11:31.755 "is_configured": true, 00:11:31.755 "data_offset": 2048, 00:11:31.755 "data_size": 63488 00:11:31.755 }, 00:11:31.755 { 00:11:31.755 "name": "BaseBdev4", 00:11:31.755 "uuid": "41d7f610-eb5f-54e0-a585-7f300ce90a93", 00:11:31.755 "is_configured": true, 00:11:31.755 "data_offset": 2048, 00:11:31.755 "data_size": 63488 00:11:31.755 } 00:11:31.755 ] 00:11:31.755 }' 00:11:31.755 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.755 10:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.322 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:32.322 10:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.322 10:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.322 [2024-11-15 10:56:38.967365] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.322 [2024-11-15 10:56:38.967439] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.322 [2024-11-15 10:56:38.970041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.322 [2024-11-15 10:56:38.970155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.322 [2024-11-15 10:56:38.970219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.322 [2024-11-15 10:56:38.970269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:32.322 { 00:11:32.322 "results": [ 00:11:32.322 { 00:11:32.322 "job": "raid_bdev1", 00:11:32.322 "core_mask": "0x1", 00:11:32.322 "workload": "randrw", 00:11:32.322 "percentage": 50, 00:11:32.322 "status": "finished", 00:11:32.322 "queue_depth": 1, 00:11:32.322 "io_size": 131072, 00:11:32.322 "runtime": 1.373335, 00:11:32.322 "iops": 14973.76823571816, 00:11:32.322 "mibps": 1871.72102946477, 00:11:32.322 "io_failed": 1, 00:11:32.322 "io_timeout": 0, 00:11:32.322 "avg_latency_us": 92.87210453169574, 00:11:32.322 "min_latency_us": 26.717903930131005, 00:11:32.322 "max_latency_us": 1638.4 00:11:32.322 } 00:11:32.322 ], 00:11:32.322 "core_count": 1 00:11:32.322 } 00:11:32.322 10:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.322 10:56:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73051 00:11:32.322 10:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 73051 ']' 00:11:32.322 10:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 73051 00:11:32.322 10:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:32.322 10:56:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:32.322 10:56:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73051 00:11:32.322 killing process with pid 73051 00:11:32.322 10:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:32.322 10:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:32.322 10:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73051' 00:11:32.322 10:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 73051 00:11:32.322 [2024-11-15 10:56:39.016176] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.322 10:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 73051 00:11:32.579 [2024-11-15 10:56:39.343118] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:33.954 10:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:33.954 10:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lrHv04qtam 00:11:33.954 10:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:33.954 10:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:33.954 10:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:33.954 10:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:33.954 10:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:33.954 10:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:33.954 00:11:33.954 real 0m4.693s 00:11:33.954 user 0m5.569s 00:11:33.954 sys 0m0.580s 00:11:33.954 10:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:11:33.954 10:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.954 ************************************ 00:11:33.954 END TEST raid_read_error_test 00:11:33.954 ************************************ 00:11:33.954 10:56:40 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:33.954 10:56:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:33.954 10:56:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:33.954 10:56:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:33.954 ************************************ 00:11:33.954 START TEST raid_write_error_test 00:11:33.954 ************************************ 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XySJKbFhJV 00:11:33.954 10:56:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73197 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73197 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 73197 ']' 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:33.954 10:56:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.954 [2024-11-15 10:56:40.704807] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:11:33.954 [2024-11-15 10:56:40.704924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73197 ] 00:11:34.213 [2024-11-15 10:56:40.880371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.213 [2024-11-15 10:56:41.001010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.470 [2024-11-15 10:56:41.207787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.470 [2024-11-15 10:56:41.207841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.727 BaseBdev1_malloc 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.727 true 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.727 [2024-11-15 10:56:41.596982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:34.727 [2024-11-15 10:56:41.597042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.727 [2024-11-15 10:56:41.597062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:34.727 [2024-11-15 10:56:41.597073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.727 [2024-11-15 10:56:41.599304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.727 [2024-11-15 10:56:41.599423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:34.727 BaseBdev1 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.727 BaseBdev2_malloc 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:34.727 10:56:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.727 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.984 true 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.984 [2024-11-15 10:56:41.663547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:34.984 [2024-11-15 10:56:41.663612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.984 [2024-11-15 10:56:41.663633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:34.984 [2024-11-15 10:56:41.663645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.984 [2024-11-15 10:56:41.666014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.984 [2024-11-15 10:56:41.666064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:34.984 BaseBdev2 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:34.984 BaseBdev3_malloc 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.984 true 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.984 [2024-11-15 10:56:41.741247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:34.984 [2024-11-15 10:56:41.741317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.984 [2024-11-15 10:56:41.741338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:34.984 [2024-11-15 10:56:41.741350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.984 [2024-11-15 10:56:41.743655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.984 [2024-11-15 10:56:41.743697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:34.984 BaseBdev3 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.984 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.985 BaseBdev4_malloc 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.985 true 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.985 [2024-11-15 10:56:41.797608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:34.985 [2024-11-15 10:56:41.797675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.985 [2024-11-15 10:56:41.797694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:34.985 [2024-11-15 10:56:41.797705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.985 [2024-11-15 10:56:41.799944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.985 [2024-11-15 10:56:41.799988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:34.985 BaseBdev4 
00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.985 [2024-11-15 10:56:41.805660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.985 [2024-11-15 10:56:41.807645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:34.985 [2024-11-15 10:56:41.807725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:34.985 [2024-11-15 10:56:41.807798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:34.985 [2024-11-15 10:56:41.808056] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:34.985 [2024-11-15 10:56:41.808073] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:34.985 [2024-11-15 10:56:41.808359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:34.985 [2024-11-15 10:56:41.808546] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:34.985 [2024-11-15 10:56:41.808559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:34.985 [2024-11-15 10:56:41.808759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.985 "name": "raid_bdev1", 00:11:34.985 "uuid": "ccde1ddb-3f14-410f-89fa-1e47c9e7ffd1", 00:11:34.985 "strip_size_kb": 64, 00:11:34.985 "state": "online", 00:11:34.985 "raid_level": "concat", 00:11:34.985 "superblock": true, 00:11:34.985 "num_base_bdevs": 4, 00:11:34.985 "num_base_bdevs_discovered": 4, 00:11:34.985 
"num_base_bdevs_operational": 4, 00:11:34.985 "base_bdevs_list": [ 00:11:34.985 { 00:11:34.985 "name": "BaseBdev1", 00:11:34.985 "uuid": "f4ff1fb8-07d4-51e0-b874-ea9f601baee4", 00:11:34.985 "is_configured": true, 00:11:34.985 "data_offset": 2048, 00:11:34.985 "data_size": 63488 00:11:34.985 }, 00:11:34.985 { 00:11:34.985 "name": "BaseBdev2", 00:11:34.985 "uuid": "589dbe76-1086-5bac-9faf-6fb496588958", 00:11:34.985 "is_configured": true, 00:11:34.985 "data_offset": 2048, 00:11:34.985 "data_size": 63488 00:11:34.985 }, 00:11:34.985 { 00:11:34.985 "name": "BaseBdev3", 00:11:34.985 "uuid": "f216d7f6-f1ea-5bc9-8393-6f0eef8f3b8b", 00:11:34.985 "is_configured": true, 00:11:34.985 "data_offset": 2048, 00:11:34.985 "data_size": 63488 00:11:34.985 }, 00:11:34.985 { 00:11:34.985 "name": "BaseBdev4", 00:11:34.985 "uuid": "cea75d07-5c2b-5eee-843e-a9cf7f73c0c1", 00:11:34.985 "is_configured": true, 00:11:34.985 "data_offset": 2048, 00:11:34.985 "data_size": 63488 00:11:34.985 } 00:11:34.985 ] 00:11:34.985 }' 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.985 10:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.598 10:56:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:35.598 10:56:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:35.598 [2024-11-15 10:56:42.382293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:36.534 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:36.534 10:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.534 10:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.534 10:56:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.534 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:36.534 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:36.534 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:36.534 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:36.534 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.534 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.534 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.534 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.534 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.534 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.534 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.534 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.535 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.535 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.535 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.535 10:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.535 10:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.535 10:56:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.535 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.535 "name": "raid_bdev1", 00:11:36.535 "uuid": "ccde1ddb-3f14-410f-89fa-1e47c9e7ffd1", 00:11:36.535 "strip_size_kb": 64, 00:11:36.535 "state": "online", 00:11:36.535 "raid_level": "concat", 00:11:36.535 "superblock": true, 00:11:36.535 "num_base_bdevs": 4, 00:11:36.535 "num_base_bdevs_discovered": 4, 00:11:36.535 "num_base_bdevs_operational": 4, 00:11:36.535 "base_bdevs_list": [ 00:11:36.535 { 00:11:36.535 "name": "BaseBdev1", 00:11:36.535 "uuid": "f4ff1fb8-07d4-51e0-b874-ea9f601baee4", 00:11:36.535 "is_configured": true, 00:11:36.535 "data_offset": 2048, 00:11:36.535 "data_size": 63488 00:11:36.535 }, 00:11:36.535 { 00:11:36.535 "name": "BaseBdev2", 00:11:36.535 "uuid": "589dbe76-1086-5bac-9faf-6fb496588958", 00:11:36.535 "is_configured": true, 00:11:36.535 "data_offset": 2048, 00:11:36.535 "data_size": 63488 00:11:36.535 }, 00:11:36.535 { 00:11:36.535 "name": "BaseBdev3", 00:11:36.535 "uuid": "f216d7f6-f1ea-5bc9-8393-6f0eef8f3b8b", 00:11:36.535 "is_configured": true, 00:11:36.535 "data_offset": 2048, 00:11:36.535 "data_size": 63488 00:11:36.535 }, 00:11:36.535 { 00:11:36.535 "name": "BaseBdev4", 00:11:36.535 "uuid": "cea75d07-5c2b-5eee-843e-a9cf7f73c0c1", 00:11:36.535 "is_configured": true, 00:11:36.535 "data_offset": 2048, 00:11:36.535 "data_size": 63488 00:11:36.535 } 00:11:36.535 ] 00:11:36.535 }' 00:11:36.535 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.535 10:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.099 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:37.099 10:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.099 10:56:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:37.099 [2024-11-15 10:56:43.735221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:37.099 [2024-11-15 10:56:43.735343] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.099 [2024-11-15 10:56:43.738545] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.099 [2024-11-15 10:56:43.738656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.099 [2024-11-15 10:56:43.738739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.099 [2024-11-15 10:56:43.738798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:37.099 { 00:11:37.099 "results": [ 00:11:37.099 { 00:11:37.099 "job": "raid_bdev1", 00:11:37.099 "core_mask": "0x1", 00:11:37.099 "workload": "randrw", 00:11:37.099 "percentage": 50, 00:11:37.099 "status": "finished", 00:11:37.099 "queue_depth": 1, 00:11:37.099 "io_size": 131072, 00:11:37.099 "runtime": 1.353196, 00:11:37.099 "iops": 13880.47259968253, 00:11:37.099 "mibps": 1735.0590749603161, 00:11:37.099 "io_failed": 1, 00:11:37.099 "io_timeout": 0, 00:11:37.099 "avg_latency_us": 99.91079502763664, 00:11:37.099 "min_latency_us": 28.618340611353712, 00:11:37.099 "max_latency_us": 1709.9458515283843 00:11:37.099 } 00:11:37.099 ], 00:11:37.099 "core_count": 1 00:11:37.100 } 00:11:37.100 10:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.100 10:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73197 00:11:37.100 10:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 73197 ']' 00:11:37.100 10:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 73197 00:11:37.100 10:56:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:11:37.100 10:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:37.100 10:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73197 00:11:37.100 10:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:37.100 10:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:37.100 10:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73197' 00:11:37.100 killing process with pid 73197 00:11:37.100 10:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 73197 00:11:37.100 [2024-11-15 10:56:43.786513] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:37.100 10:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 73197 00:11:37.356 [2024-11-15 10:56:44.161614] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.728 10:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XySJKbFhJV 00:11:38.728 10:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:38.728 10:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:38.728 10:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:38.728 10:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:38.728 ************************************ 00:11:38.728 END TEST raid_write_error_test 00:11:38.728 ************************************ 00:11:38.728 10:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:38.728 10:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:38.728 10:56:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:38.728 00:11:38.728 real 0m4.852s 00:11:38.728 user 0m5.719s 00:11:38.728 sys 0m0.582s 00:11:38.728 10:56:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:38.728 10:56:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.728 10:56:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:38.728 10:56:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:38.728 10:56:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:38.728 10:56:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:38.728 10:56:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.728 ************************************ 00:11:38.728 START TEST raid_state_function_test 00:11:38.728 ************************************ 00:11:38.728 10:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:11:38.728 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:38.728 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:38.728 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:38.728 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:38.728 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:38.728 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:38.728 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:38.728 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:38.728 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:38.728 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:38.728 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:38.728 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:38.728 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:38.728 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:38.729 10:56:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73340 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73340' 00:11:38.729 Process raid pid: 73340 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73340 00:11:38.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 73340 ']' 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:38.729 10:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.729 [2024-11-15 10:56:45.624081] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:11:38.729 [2024-11-15 10:56:45.624278] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.986 [2024-11-15 10:56:45.804708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.243 [2024-11-15 10:56:45.929191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.243 [2024-11-15 10:56:46.130945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.243 [2024-11-15 10:56:46.131083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.865 [2024-11-15 10:56:46.528639] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:39.865 [2024-11-15 10:56:46.528694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:39.865 [2024-11-15 10:56:46.528706] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:39.865 [2024-11-15 10:56:46.528716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:39.865 [2024-11-15 10:56:46.528722] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:39.865 [2024-11-15 10:56:46.528731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:39.865 [2024-11-15 10:56:46.528737] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:39.865 [2024-11-15 10:56:46.528745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.865 "name": "Existed_Raid", 00:11:39.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.865 "strip_size_kb": 0, 00:11:39.865 "state": "configuring", 00:11:39.865 "raid_level": "raid1", 00:11:39.865 "superblock": false, 00:11:39.865 "num_base_bdevs": 4, 00:11:39.865 "num_base_bdevs_discovered": 0, 00:11:39.865 "num_base_bdevs_operational": 4, 00:11:39.865 "base_bdevs_list": [ 00:11:39.865 { 00:11:39.865 "name": "BaseBdev1", 00:11:39.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.865 "is_configured": false, 00:11:39.865 "data_offset": 0, 00:11:39.865 "data_size": 0 00:11:39.865 }, 00:11:39.865 { 00:11:39.865 "name": "BaseBdev2", 00:11:39.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.865 "is_configured": false, 00:11:39.865 "data_offset": 0, 00:11:39.865 "data_size": 0 00:11:39.865 }, 00:11:39.865 { 00:11:39.865 "name": "BaseBdev3", 00:11:39.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.865 "is_configured": false, 00:11:39.865 "data_offset": 0, 00:11:39.865 "data_size": 0 00:11:39.865 }, 00:11:39.865 { 00:11:39.865 "name": "BaseBdev4", 00:11:39.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.865 "is_configured": false, 00:11:39.865 "data_offset": 0, 00:11:39.865 "data_size": 0 00:11:39.865 } 00:11:39.865 ] 00:11:39.865 }' 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.865 10:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.124 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:40.124 10:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.124 10:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.124 [2024-11-15 10:56:46.947923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:40.124 [2024-11-15 10:56:46.948049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:40.124 10:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.124 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:40.124 10:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.124 10:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.124 [2024-11-15 10:56:46.959866] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.124 [2024-11-15 10:56:46.959974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.124 [2024-11-15 10:56:46.960002] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:40.124 [2024-11-15 10:56:46.960025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:40.124 [2024-11-15 10:56:46.960043] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:40.124 [2024-11-15 10:56:46.960064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:40.124 [2024-11-15 10:56:46.960082] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:40.124 [2024-11-15 10:56:46.960103] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:40.124 10:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.124 10:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:40.124 10:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.124 10:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.124 [2024-11-15 10:56:47.007757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.124 BaseBdev1 00:11:40.124 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.124 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:40.124 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:40.124 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:40.124 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:40.124 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:40.124 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:40.124 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:40.124 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.124 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.124 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.124 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:40.124 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.124 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.124 [ 00:11:40.124 { 00:11:40.124 "name": "BaseBdev1", 00:11:40.124 "aliases": [ 00:11:40.124 "ba0e2983-3d53-4a9e-88f6-2fcefcaeaafa" 00:11:40.124 ], 00:11:40.124 "product_name": "Malloc disk", 00:11:40.124 "block_size": 512, 00:11:40.124 "num_blocks": 65536, 00:11:40.124 "uuid": "ba0e2983-3d53-4a9e-88f6-2fcefcaeaafa", 00:11:40.124 "assigned_rate_limits": { 00:11:40.124 "rw_ios_per_sec": 0, 00:11:40.124 "rw_mbytes_per_sec": 0, 00:11:40.124 "r_mbytes_per_sec": 0, 00:11:40.124 "w_mbytes_per_sec": 0 00:11:40.124 }, 00:11:40.124 "claimed": true, 00:11:40.124 "claim_type": "exclusive_write", 00:11:40.124 "zoned": false, 00:11:40.124 "supported_io_types": { 00:11:40.124 "read": true, 00:11:40.124 "write": true, 00:11:40.124 "unmap": true, 00:11:40.124 "flush": true, 00:11:40.124 "reset": true, 00:11:40.124 "nvme_admin": false, 00:11:40.124 "nvme_io": false, 00:11:40.124 "nvme_io_md": false, 00:11:40.124 "write_zeroes": true, 00:11:40.124 "zcopy": true, 00:11:40.124 "get_zone_info": false, 00:11:40.124 "zone_management": false, 00:11:40.124 "zone_append": false, 00:11:40.124 "compare": false, 00:11:40.124 "compare_and_write": false, 00:11:40.124 "abort": true, 00:11:40.124 "seek_hole": false, 00:11:40.124 "seek_data": false, 00:11:40.124 "copy": true, 00:11:40.124 "nvme_iov_md": false 00:11:40.124 }, 00:11:40.124 "memory_domains": [ 00:11:40.124 { 00:11:40.124 "dma_device_id": "system", 00:11:40.124 "dma_device_type": 1 00:11:40.124 }, 00:11:40.124 { 00:11:40.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.124 "dma_device_type": 2 00:11:40.124 } 00:11:40.124 ], 00:11:40.124 "driver_specific": {} 00:11:40.124 } 00:11:40.124 ] 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.382 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.382 "name": "Existed_Raid", 
00:11:40.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.382 "strip_size_kb": 0, 00:11:40.382 "state": "configuring", 00:11:40.382 "raid_level": "raid1", 00:11:40.382 "superblock": false, 00:11:40.382 "num_base_bdevs": 4, 00:11:40.382 "num_base_bdevs_discovered": 1, 00:11:40.382 "num_base_bdevs_operational": 4, 00:11:40.382 "base_bdevs_list": [ 00:11:40.382 { 00:11:40.382 "name": "BaseBdev1", 00:11:40.382 "uuid": "ba0e2983-3d53-4a9e-88f6-2fcefcaeaafa", 00:11:40.382 "is_configured": true, 00:11:40.382 "data_offset": 0, 00:11:40.382 "data_size": 65536 00:11:40.382 }, 00:11:40.382 { 00:11:40.382 "name": "BaseBdev2", 00:11:40.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.382 "is_configured": false, 00:11:40.382 "data_offset": 0, 00:11:40.383 "data_size": 0 00:11:40.383 }, 00:11:40.383 { 00:11:40.383 "name": "BaseBdev3", 00:11:40.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.383 "is_configured": false, 00:11:40.383 "data_offset": 0, 00:11:40.383 "data_size": 0 00:11:40.383 }, 00:11:40.383 { 00:11:40.383 "name": "BaseBdev4", 00:11:40.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.383 "is_configured": false, 00:11:40.383 "data_offset": 0, 00:11:40.383 "data_size": 0 00:11:40.383 } 00:11:40.383 ] 00:11:40.383 }' 00:11:40.383 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.383 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.641 [2024-11-15 10:56:47.486993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:40.641 [2024-11-15 10:56:47.487055] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.641 [2024-11-15 10:56:47.499026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.641 [2024-11-15 10:56:47.501131] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:40.641 [2024-11-15 10:56:47.501218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:40.641 [2024-11-15 10:56:47.501255] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:40.641 [2024-11-15 10:56:47.501295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:40.641 [2024-11-15 10:56:47.501337] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:40.641 [2024-11-15 10:56:47.501381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.641 
10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.641 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.641 "name": "Existed_Raid", 00:11:40.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.641 "strip_size_kb": 0, 00:11:40.641 "state": "configuring", 00:11:40.641 "raid_level": "raid1", 00:11:40.641 "superblock": false, 00:11:40.641 "num_base_bdevs": 4, 00:11:40.641 "num_base_bdevs_discovered": 1, 
00:11:40.641 "num_base_bdevs_operational": 4, 00:11:40.641 "base_bdevs_list": [ 00:11:40.641 { 00:11:40.641 "name": "BaseBdev1", 00:11:40.641 "uuid": "ba0e2983-3d53-4a9e-88f6-2fcefcaeaafa", 00:11:40.641 "is_configured": true, 00:11:40.641 "data_offset": 0, 00:11:40.641 "data_size": 65536 00:11:40.641 }, 00:11:40.641 { 00:11:40.641 "name": "BaseBdev2", 00:11:40.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.641 "is_configured": false, 00:11:40.641 "data_offset": 0, 00:11:40.641 "data_size": 0 00:11:40.641 }, 00:11:40.641 { 00:11:40.641 "name": "BaseBdev3", 00:11:40.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.641 "is_configured": false, 00:11:40.641 "data_offset": 0, 00:11:40.641 "data_size": 0 00:11:40.641 }, 00:11:40.641 { 00:11:40.641 "name": "BaseBdev4", 00:11:40.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.641 "is_configured": false, 00:11:40.641 "data_offset": 0, 00:11:40.641 "data_size": 0 00:11:40.641 } 00:11:40.641 ] 00:11:40.642 }' 00:11:40.642 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.642 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.209 [2024-11-15 10:56:47.986165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.209 BaseBdev2 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.209 10:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.209 [ 00:11:41.209 { 00:11:41.209 "name": "BaseBdev2", 00:11:41.209 "aliases": [ 00:11:41.209 "f9911001-978e-4c03-a3f1-3cf24e0c364e" 00:11:41.209 ], 00:11:41.209 "product_name": "Malloc disk", 00:11:41.209 "block_size": 512, 00:11:41.209 "num_blocks": 65536, 00:11:41.209 "uuid": "f9911001-978e-4c03-a3f1-3cf24e0c364e", 00:11:41.209 "assigned_rate_limits": { 00:11:41.209 "rw_ios_per_sec": 0, 00:11:41.209 "rw_mbytes_per_sec": 0, 00:11:41.209 "r_mbytes_per_sec": 0, 00:11:41.209 "w_mbytes_per_sec": 0 00:11:41.209 }, 00:11:41.209 "claimed": true, 00:11:41.209 "claim_type": "exclusive_write", 00:11:41.209 "zoned": false, 00:11:41.209 "supported_io_types": { 00:11:41.209 "read": true, 
00:11:41.209 "write": true, 00:11:41.209 "unmap": true, 00:11:41.209 "flush": true, 00:11:41.209 "reset": true, 00:11:41.209 "nvme_admin": false, 00:11:41.209 "nvme_io": false, 00:11:41.209 "nvme_io_md": false, 00:11:41.209 "write_zeroes": true, 00:11:41.209 "zcopy": true, 00:11:41.209 "get_zone_info": false, 00:11:41.209 "zone_management": false, 00:11:41.209 "zone_append": false, 00:11:41.209 "compare": false, 00:11:41.209 "compare_and_write": false, 00:11:41.209 "abort": true, 00:11:41.209 "seek_hole": false, 00:11:41.209 "seek_data": false, 00:11:41.209 "copy": true, 00:11:41.209 "nvme_iov_md": false 00:11:41.209 }, 00:11:41.209 "memory_domains": [ 00:11:41.209 { 00:11:41.209 "dma_device_id": "system", 00:11:41.209 "dma_device_type": 1 00:11:41.209 }, 00:11:41.209 { 00:11:41.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.209 "dma_device_type": 2 00:11:41.209 } 00:11:41.209 ], 00:11:41.209 "driver_specific": {} 00:11:41.209 } 00:11:41.209 ] 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.209 "name": "Existed_Raid", 00:11:41.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.209 "strip_size_kb": 0, 00:11:41.209 "state": "configuring", 00:11:41.209 "raid_level": "raid1", 00:11:41.209 "superblock": false, 00:11:41.209 "num_base_bdevs": 4, 00:11:41.209 "num_base_bdevs_discovered": 2, 00:11:41.209 "num_base_bdevs_operational": 4, 00:11:41.209 "base_bdevs_list": [ 00:11:41.209 { 00:11:41.209 "name": "BaseBdev1", 00:11:41.209 "uuid": "ba0e2983-3d53-4a9e-88f6-2fcefcaeaafa", 00:11:41.209 "is_configured": true, 00:11:41.209 "data_offset": 0, 00:11:41.209 "data_size": 65536 00:11:41.209 }, 00:11:41.209 { 00:11:41.209 "name": "BaseBdev2", 00:11:41.209 "uuid": "f9911001-978e-4c03-a3f1-3cf24e0c364e", 00:11:41.209 "is_configured": true, 
00:11:41.209 "data_offset": 0, 00:11:41.209 "data_size": 65536 00:11:41.209 }, 00:11:41.209 { 00:11:41.209 "name": "BaseBdev3", 00:11:41.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.209 "is_configured": false, 00:11:41.209 "data_offset": 0, 00:11:41.209 "data_size": 0 00:11:41.209 }, 00:11:41.209 { 00:11:41.209 "name": "BaseBdev4", 00:11:41.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.209 "is_configured": false, 00:11:41.209 "data_offset": 0, 00:11:41.209 "data_size": 0 00:11:41.209 } 00:11:41.209 ] 00:11:41.209 }' 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.209 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.776 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:41.776 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.776 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.776 BaseBdev3 00:11:41.776 [2024-11-15 10:56:48.535691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:41.776 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.776 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:41.776 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:41.776 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:41.776 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:41.776 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:41.776 10:56:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:41.776 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.777 [ 00:11:41.777 { 00:11:41.777 "name": "BaseBdev3", 00:11:41.777 "aliases": [ 00:11:41.777 "22416227-75e3-4fc7-b7d2-244022b54a60" 00:11:41.777 ], 00:11:41.777 "product_name": "Malloc disk", 00:11:41.777 "block_size": 512, 00:11:41.777 "num_blocks": 65536, 00:11:41.777 "uuid": "22416227-75e3-4fc7-b7d2-244022b54a60", 00:11:41.777 "assigned_rate_limits": { 00:11:41.777 "rw_ios_per_sec": 0, 00:11:41.777 "rw_mbytes_per_sec": 0, 00:11:41.777 "r_mbytes_per_sec": 0, 00:11:41.777 "w_mbytes_per_sec": 0 00:11:41.777 }, 00:11:41.777 "claimed": true, 00:11:41.777 "claim_type": "exclusive_write", 00:11:41.777 "zoned": false, 00:11:41.777 "supported_io_types": { 00:11:41.777 "read": true, 00:11:41.777 "write": true, 00:11:41.777 "unmap": true, 00:11:41.777 "flush": true, 00:11:41.777 "reset": true, 00:11:41.777 "nvme_admin": false, 00:11:41.777 "nvme_io": false, 00:11:41.777 "nvme_io_md": false, 00:11:41.777 "write_zeroes": true, 00:11:41.777 "zcopy": true, 00:11:41.777 "get_zone_info": false, 00:11:41.777 "zone_management": false, 00:11:41.777 "zone_append": false, 00:11:41.777 "compare": false, 00:11:41.777 "compare_and_write": false, 
00:11:41.777 "abort": true, 00:11:41.777 "seek_hole": false, 00:11:41.777 "seek_data": false, 00:11:41.777 "copy": true, 00:11:41.777 "nvme_iov_md": false 00:11:41.777 }, 00:11:41.777 "memory_domains": [ 00:11:41.777 { 00:11:41.777 "dma_device_id": "system", 00:11:41.777 "dma_device_type": 1 00:11:41.777 }, 00:11:41.777 { 00:11:41.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.777 "dma_device_type": 2 00:11:41.777 } 00:11:41.777 ], 00:11:41.777 "driver_specific": {} 00:11:41.777 } 00:11:41.777 ] 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.777 "name": "Existed_Raid", 00:11:41.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.777 "strip_size_kb": 0, 00:11:41.777 "state": "configuring", 00:11:41.777 "raid_level": "raid1", 00:11:41.777 "superblock": false, 00:11:41.777 "num_base_bdevs": 4, 00:11:41.777 "num_base_bdevs_discovered": 3, 00:11:41.777 "num_base_bdevs_operational": 4, 00:11:41.777 "base_bdevs_list": [ 00:11:41.777 { 00:11:41.777 "name": "BaseBdev1", 00:11:41.777 "uuid": "ba0e2983-3d53-4a9e-88f6-2fcefcaeaafa", 00:11:41.777 "is_configured": true, 00:11:41.777 "data_offset": 0, 00:11:41.777 "data_size": 65536 00:11:41.777 }, 00:11:41.777 { 00:11:41.777 "name": "BaseBdev2", 00:11:41.777 "uuid": "f9911001-978e-4c03-a3f1-3cf24e0c364e", 00:11:41.777 "is_configured": true, 00:11:41.777 "data_offset": 0, 00:11:41.777 "data_size": 65536 00:11:41.777 }, 00:11:41.777 { 00:11:41.777 "name": "BaseBdev3", 00:11:41.777 "uuid": "22416227-75e3-4fc7-b7d2-244022b54a60", 00:11:41.777 "is_configured": true, 00:11:41.777 "data_offset": 0, 00:11:41.777 "data_size": 65536 00:11:41.777 }, 00:11:41.777 { 00:11:41.777 "name": "BaseBdev4", 00:11:41.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.777 "is_configured": false, 
00:11:41.777 "data_offset": 0, 00:11:41.777 "data_size": 0 00:11:41.777 } 00:11:41.777 ] 00:11:41.777 }' 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.777 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.344 10:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:42.344 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.344 10:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.344 [2024-11-15 10:56:49.016455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:42.344 [2024-11-15 10:56:49.016511] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:42.344 [2024-11-15 10:56:49.016519] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:42.344 [2024-11-15 10:56:49.016814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:42.344 [2024-11-15 10:56:49.016985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:42.344 [2024-11-15 10:56:49.016998] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:42.344 [2024-11-15 10:56:49.017272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.344 BaseBdev4 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.344 [ 00:11:42.344 { 00:11:42.344 "name": "BaseBdev4", 00:11:42.344 "aliases": [ 00:11:42.344 "b937b356-7a2f-4091-8259-97e0ee1456ad" 00:11:42.344 ], 00:11:42.344 "product_name": "Malloc disk", 00:11:42.344 "block_size": 512, 00:11:42.344 "num_blocks": 65536, 00:11:42.344 "uuid": "b937b356-7a2f-4091-8259-97e0ee1456ad", 00:11:42.344 "assigned_rate_limits": { 00:11:42.344 "rw_ios_per_sec": 0, 00:11:42.344 "rw_mbytes_per_sec": 0, 00:11:42.344 "r_mbytes_per_sec": 0, 00:11:42.344 "w_mbytes_per_sec": 0 00:11:42.344 }, 00:11:42.344 "claimed": true, 00:11:42.344 "claim_type": "exclusive_write", 00:11:42.344 "zoned": false, 00:11:42.344 "supported_io_types": { 00:11:42.344 "read": true, 00:11:42.344 "write": true, 00:11:42.344 "unmap": true, 00:11:42.344 "flush": true, 00:11:42.344 "reset": true, 00:11:42.344 
"nvme_admin": false, 00:11:42.344 "nvme_io": false, 00:11:42.344 "nvme_io_md": false, 00:11:42.344 "write_zeroes": true, 00:11:42.344 "zcopy": true, 00:11:42.344 "get_zone_info": false, 00:11:42.344 "zone_management": false, 00:11:42.344 "zone_append": false, 00:11:42.344 "compare": false, 00:11:42.344 "compare_and_write": false, 00:11:42.344 "abort": true, 00:11:42.344 "seek_hole": false, 00:11:42.344 "seek_data": false, 00:11:42.344 "copy": true, 00:11:42.344 "nvme_iov_md": false 00:11:42.344 }, 00:11:42.344 "memory_domains": [ 00:11:42.344 { 00:11:42.344 "dma_device_id": "system", 00:11:42.344 "dma_device_type": 1 00:11:42.344 }, 00:11:42.344 { 00:11:42.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.344 "dma_device_type": 2 00:11:42.344 } 00:11:42.344 ], 00:11:42.344 "driver_specific": {} 00:11:42.344 } 00:11:42.344 ] 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.344 10:56:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.344 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.344 "name": "Existed_Raid", 00:11:42.344 "uuid": "ecd37154-c7dc-4094-9c0d-280441b392e7", 00:11:42.344 "strip_size_kb": 0, 00:11:42.344 "state": "online", 00:11:42.344 "raid_level": "raid1", 00:11:42.344 "superblock": false, 00:11:42.344 "num_base_bdevs": 4, 00:11:42.344 "num_base_bdevs_discovered": 4, 00:11:42.344 "num_base_bdevs_operational": 4, 00:11:42.344 "base_bdevs_list": [ 00:11:42.344 { 00:11:42.344 "name": "BaseBdev1", 00:11:42.344 "uuid": "ba0e2983-3d53-4a9e-88f6-2fcefcaeaafa", 00:11:42.344 "is_configured": true, 00:11:42.344 "data_offset": 0, 00:11:42.344 "data_size": 65536 00:11:42.344 }, 00:11:42.344 { 00:11:42.344 "name": "BaseBdev2", 00:11:42.344 "uuid": "f9911001-978e-4c03-a3f1-3cf24e0c364e", 00:11:42.344 "is_configured": true, 00:11:42.344 "data_offset": 0, 00:11:42.345 "data_size": 65536 00:11:42.345 }, 00:11:42.345 { 00:11:42.345 "name": "BaseBdev3", 00:11:42.345 "uuid": 
"22416227-75e3-4fc7-b7d2-244022b54a60", 00:11:42.345 "is_configured": true, 00:11:42.345 "data_offset": 0, 00:11:42.345 "data_size": 65536 00:11:42.345 }, 00:11:42.345 { 00:11:42.345 "name": "BaseBdev4", 00:11:42.345 "uuid": "b937b356-7a2f-4091-8259-97e0ee1456ad", 00:11:42.345 "is_configured": true, 00:11:42.345 "data_offset": 0, 00:11:42.345 "data_size": 65536 00:11:42.345 } 00:11:42.345 ] 00:11:42.345 }' 00:11:42.345 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.345 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.603 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:42.603 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:42.603 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:42.603 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:42.603 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:42.603 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:42.603 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:42.603 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:42.603 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.603 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.603 [2024-11-15 10:56:49.488129] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.603 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.603 10:56:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:42.603 "name": "Existed_Raid", 00:11:42.603 "aliases": [ 00:11:42.603 "ecd37154-c7dc-4094-9c0d-280441b392e7" 00:11:42.603 ], 00:11:42.603 "product_name": "Raid Volume", 00:11:42.603 "block_size": 512, 00:11:42.603 "num_blocks": 65536, 00:11:42.603 "uuid": "ecd37154-c7dc-4094-9c0d-280441b392e7", 00:11:42.603 "assigned_rate_limits": { 00:11:42.603 "rw_ios_per_sec": 0, 00:11:42.603 "rw_mbytes_per_sec": 0, 00:11:42.603 "r_mbytes_per_sec": 0, 00:11:42.603 "w_mbytes_per_sec": 0 00:11:42.603 }, 00:11:42.603 "claimed": false, 00:11:42.603 "zoned": false, 00:11:42.603 "supported_io_types": { 00:11:42.603 "read": true, 00:11:42.603 "write": true, 00:11:42.603 "unmap": false, 00:11:42.603 "flush": false, 00:11:42.603 "reset": true, 00:11:42.603 "nvme_admin": false, 00:11:42.603 "nvme_io": false, 00:11:42.603 "nvme_io_md": false, 00:11:42.603 "write_zeroes": true, 00:11:42.603 "zcopy": false, 00:11:42.603 "get_zone_info": false, 00:11:42.603 "zone_management": false, 00:11:42.603 "zone_append": false, 00:11:42.603 "compare": false, 00:11:42.603 "compare_and_write": false, 00:11:42.603 "abort": false, 00:11:42.603 "seek_hole": false, 00:11:42.603 "seek_data": false, 00:11:42.603 "copy": false, 00:11:42.603 "nvme_iov_md": false 00:11:42.603 }, 00:11:42.603 "memory_domains": [ 00:11:42.603 { 00:11:42.603 "dma_device_id": "system", 00:11:42.603 "dma_device_type": 1 00:11:42.603 }, 00:11:42.603 { 00:11:42.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.603 "dma_device_type": 2 00:11:42.603 }, 00:11:42.603 { 00:11:42.603 "dma_device_id": "system", 00:11:42.603 "dma_device_type": 1 00:11:42.603 }, 00:11:42.603 { 00:11:42.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.603 "dma_device_type": 2 00:11:42.603 }, 00:11:42.603 { 00:11:42.603 "dma_device_id": "system", 00:11:42.603 "dma_device_type": 1 00:11:42.603 }, 00:11:42.603 { 00:11:42.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:42.603 "dma_device_type": 2 00:11:42.603 }, 00:11:42.603 { 00:11:42.603 "dma_device_id": "system", 00:11:42.603 "dma_device_type": 1 00:11:42.603 }, 00:11:42.603 { 00:11:42.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.603 "dma_device_type": 2 00:11:42.603 } 00:11:42.603 ], 00:11:42.603 "driver_specific": { 00:11:42.603 "raid": { 00:11:42.603 "uuid": "ecd37154-c7dc-4094-9c0d-280441b392e7", 00:11:42.603 "strip_size_kb": 0, 00:11:42.603 "state": "online", 00:11:42.603 "raid_level": "raid1", 00:11:42.603 "superblock": false, 00:11:42.603 "num_base_bdevs": 4, 00:11:42.603 "num_base_bdevs_discovered": 4, 00:11:42.603 "num_base_bdevs_operational": 4, 00:11:42.603 "base_bdevs_list": [ 00:11:42.603 { 00:11:42.603 "name": "BaseBdev1", 00:11:42.603 "uuid": "ba0e2983-3d53-4a9e-88f6-2fcefcaeaafa", 00:11:42.603 "is_configured": true, 00:11:42.603 "data_offset": 0, 00:11:42.604 "data_size": 65536 00:11:42.604 }, 00:11:42.604 { 00:11:42.604 "name": "BaseBdev2", 00:11:42.604 "uuid": "f9911001-978e-4c03-a3f1-3cf24e0c364e", 00:11:42.604 "is_configured": true, 00:11:42.604 "data_offset": 0, 00:11:42.604 "data_size": 65536 00:11:42.604 }, 00:11:42.604 { 00:11:42.604 "name": "BaseBdev3", 00:11:42.604 "uuid": "22416227-75e3-4fc7-b7d2-244022b54a60", 00:11:42.604 "is_configured": true, 00:11:42.604 "data_offset": 0, 00:11:42.604 "data_size": 65536 00:11:42.604 }, 00:11:42.604 { 00:11:42.604 "name": "BaseBdev4", 00:11:42.604 "uuid": "b937b356-7a2f-4091-8259-97e0ee1456ad", 00:11:42.604 "is_configured": true, 00:11:42.604 "data_offset": 0, 00:11:42.604 "data_size": 65536 00:11:42.604 } 00:11:42.604 ] 00:11:42.604 } 00:11:42.604 } 00:11:42.604 }' 00:11:42.604 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:42.862 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:42.862 BaseBdev2 00:11:42.862 BaseBdev3 
00:11:42.862 BaseBdev4' 00:11:42.862 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.862 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:42.862 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.862 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:42.862 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.862 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.862 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.862 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.862 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.862 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.863 10:56:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.863 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.121 10:56:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.121 [2024-11-15 10:56:49.803307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.121 
10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.121 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.121 "name": "Existed_Raid", 00:11:43.122 "uuid": "ecd37154-c7dc-4094-9c0d-280441b392e7", 00:11:43.122 "strip_size_kb": 0, 00:11:43.122 "state": "online", 00:11:43.122 "raid_level": "raid1", 00:11:43.122 "superblock": false, 00:11:43.122 "num_base_bdevs": 4, 00:11:43.122 "num_base_bdevs_discovered": 3, 00:11:43.122 "num_base_bdevs_operational": 3, 00:11:43.122 "base_bdevs_list": [ 00:11:43.122 { 00:11:43.122 "name": null, 00:11:43.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.122 "is_configured": false, 00:11:43.122 "data_offset": 0, 00:11:43.122 "data_size": 65536 00:11:43.122 }, 00:11:43.122 { 00:11:43.122 "name": "BaseBdev2", 00:11:43.122 "uuid": "f9911001-978e-4c03-a3f1-3cf24e0c364e", 00:11:43.122 "is_configured": true, 00:11:43.122 "data_offset": 0, 00:11:43.122 "data_size": 65536 00:11:43.122 }, 00:11:43.122 { 00:11:43.122 "name": "BaseBdev3", 00:11:43.122 "uuid": "22416227-75e3-4fc7-b7d2-244022b54a60", 00:11:43.122 "is_configured": true, 00:11:43.122 "data_offset": 0, 
00:11:43.122 "data_size": 65536 00:11:43.122 }, 00:11:43.122 { 00:11:43.122 "name": "BaseBdev4", 00:11:43.122 "uuid": "b937b356-7a2f-4091-8259-97e0ee1456ad", 00:11:43.122 "is_configured": true, 00:11:43.122 "data_offset": 0, 00:11:43.122 "data_size": 65536 00:11:43.122 } 00:11:43.122 ] 00:11:43.122 }' 00:11:43.122 10:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.122 10:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.688 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:43.688 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:43.688 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.688 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.688 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:43.688 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.688 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.688 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:43.688 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:43.688 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.689 [2024-11-15 10:56:50.353474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.689 [2024-11-15 10:56:50.493813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.689 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.947 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.947 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:43.947 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:43.947 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:43.947 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.947 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.947 [2024-11-15 10:56:50.656531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:43.947 [2024-11-15 10:56:50.656633] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.947 [2024-11-15 10:56:50.757817] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.947 [2024-11-15 10:56:50.757877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.947 [2024-11-15 10:56:50.757891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.948 BaseBdev2 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.948 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.233 [ 00:11:44.233 { 00:11:44.233 "name": "BaseBdev2", 00:11:44.233 "aliases": [ 00:11:44.233 "4746be4f-17eb-4d78-a316-482d688ae03f" 00:11:44.233 ], 00:11:44.233 "product_name": "Malloc disk", 00:11:44.233 "block_size": 512, 00:11:44.233 "num_blocks": 65536, 00:11:44.233 "uuid": "4746be4f-17eb-4d78-a316-482d688ae03f", 00:11:44.233 "assigned_rate_limits": { 00:11:44.233 "rw_ios_per_sec": 0, 00:11:44.233 "rw_mbytes_per_sec": 0, 00:11:44.233 "r_mbytes_per_sec": 0, 00:11:44.233 "w_mbytes_per_sec": 0 00:11:44.233 }, 00:11:44.233 "claimed": false, 00:11:44.233 "zoned": false, 00:11:44.233 "supported_io_types": { 00:11:44.233 "read": true, 00:11:44.233 "write": true, 00:11:44.233 "unmap": true, 00:11:44.233 "flush": true, 00:11:44.233 "reset": true, 00:11:44.233 "nvme_admin": false, 00:11:44.233 "nvme_io": false, 00:11:44.233 "nvme_io_md": false, 00:11:44.233 "write_zeroes": true, 00:11:44.233 "zcopy": true, 00:11:44.233 "get_zone_info": false, 00:11:44.233 "zone_management": false, 00:11:44.233 "zone_append": false, 
00:11:44.233 "compare": false, 00:11:44.233 "compare_and_write": false, 00:11:44.233 "abort": true, 00:11:44.233 "seek_hole": false, 00:11:44.233 "seek_data": false, 00:11:44.233 "copy": true, 00:11:44.233 "nvme_iov_md": false 00:11:44.233 }, 00:11:44.233 "memory_domains": [ 00:11:44.233 { 00:11:44.233 "dma_device_id": "system", 00:11:44.233 "dma_device_type": 1 00:11:44.233 }, 00:11:44.233 { 00:11:44.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.233 "dma_device_type": 2 00:11:44.233 } 00:11:44.233 ], 00:11:44.233 "driver_specific": {} 00:11:44.233 } 00:11:44.233 ] 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.233 BaseBdev3 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.233 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.233 [ 00:11:44.233 { 00:11:44.233 "name": "BaseBdev3", 00:11:44.233 "aliases": [ 00:11:44.233 "06c66363-fec3-4428-9b72-346c3cb3e5fb" 00:11:44.233 ], 00:11:44.233 "product_name": "Malloc disk", 00:11:44.233 "block_size": 512, 00:11:44.233 "num_blocks": 65536, 00:11:44.233 "uuid": "06c66363-fec3-4428-9b72-346c3cb3e5fb", 00:11:44.233 "assigned_rate_limits": { 00:11:44.233 "rw_ios_per_sec": 0, 00:11:44.233 "rw_mbytes_per_sec": 0, 00:11:44.233 "r_mbytes_per_sec": 0, 00:11:44.233 "w_mbytes_per_sec": 0 00:11:44.233 }, 00:11:44.233 "claimed": false, 00:11:44.233 "zoned": false, 00:11:44.233 "supported_io_types": { 00:11:44.233 "read": true, 00:11:44.233 "write": true, 00:11:44.233 "unmap": true, 00:11:44.233 "flush": true, 00:11:44.233 "reset": true, 00:11:44.233 "nvme_admin": false, 00:11:44.233 "nvme_io": false, 00:11:44.233 "nvme_io_md": false, 00:11:44.233 "write_zeroes": true, 00:11:44.233 "zcopy": true, 00:11:44.233 "get_zone_info": false, 00:11:44.233 "zone_management": false, 00:11:44.233 "zone_append": false, 
00:11:44.233 "compare": false, 00:11:44.233 "compare_and_write": false, 00:11:44.233 "abort": true, 00:11:44.233 "seek_hole": false, 00:11:44.233 "seek_data": false, 00:11:44.233 "copy": true, 00:11:44.233 "nvme_iov_md": false 00:11:44.233 }, 00:11:44.233 "memory_domains": [ 00:11:44.233 { 00:11:44.233 "dma_device_id": "system", 00:11:44.233 "dma_device_type": 1 00:11:44.233 }, 00:11:44.233 { 00:11:44.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.233 "dma_device_type": 2 00:11:44.233 } 00:11:44.233 ], 00:11:44.233 "driver_specific": {} 00:11:44.233 } 00:11:44.233 ] 00:11:44.234 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.234 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:44.234 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:44.234 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.234 10:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:44.234 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.234 10:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.234 BaseBdev4 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.234 [ 00:11:44.234 { 00:11:44.234 "name": "BaseBdev4", 00:11:44.234 "aliases": [ 00:11:44.234 "551f4ba9-71c2-435e-ae7a-2221f93d7c02" 00:11:44.234 ], 00:11:44.234 "product_name": "Malloc disk", 00:11:44.234 "block_size": 512, 00:11:44.234 "num_blocks": 65536, 00:11:44.234 "uuid": "551f4ba9-71c2-435e-ae7a-2221f93d7c02", 00:11:44.234 "assigned_rate_limits": { 00:11:44.234 "rw_ios_per_sec": 0, 00:11:44.234 "rw_mbytes_per_sec": 0, 00:11:44.234 "r_mbytes_per_sec": 0, 00:11:44.234 "w_mbytes_per_sec": 0 00:11:44.234 }, 00:11:44.234 "claimed": false, 00:11:44.234 "zoned": false, 00:11:44.234 "supported_io_types": { 00:11:44.234 "read": true, 00:11:44.234 "write": true, 00:11:44.234 "unmap": true, 00:11:44.234 "flush": true, 00:11:44.234 "reset": true, 00:11:44.234 "nvme_admin": false, 00:11:44.234 "nvme_io": false, 00:11:44.234 "nvme_io_md": false, 00:11:44.234 "write_zeroes": true, 00:11:44.234 "zcopy": true, 00:11:44.234 "get_zone_info": false, 00:11:44.234 "zone_management": false, 00:11:44.234 "zone_append": false, 
00:11:44.234 "compare": false, 00:11:44.234 "compare_and_write": false, 00:11:44.234 "abort": true, 00:11:44.234 "seek_hole": false, 00:11:44.234 "seek_data": false, 00:11:44.234 "copy": true, 00:11:44.234 "nvme_iov_md": false 00:11:44.234 }, 00:11:44.234 "memory_domains": [ 00:11:44.234 { 00:11:44.234 "dma_device_id": "system", 00:11:44.234 "dma_device_type": 1 00:11:44.234 }, 00:11:44.234 { 00:11:44.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.234 "dma_device_type": 2 00:11:44.234 } 00:11:44.234 ], 00:11:44.234 "driver_specific": {} 00:11:44.234 } 00:11:44.234 ] 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.234 [2024-11-15 10:56:51.068363] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:44.234 [2024-11-15 10:56:51.068496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:44.234 [2024-11-15 10:56:51.068554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.234 [2024-11-15 10:56:51.070571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:44.234 [2024-11-15 10:56:51.070670] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:44.234 "name": "Existed_Raid", 00:11:44.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.234 "strip_size_kb": 0, 00:11:44.234 "state": "configuring", 00:11:44.234 "raid_level": "raid1", 00:11:44.234 "superblock": false, 00:11:44.234 "num_base_bdevs": 4, 00:11:44.234 "num_base_bdevs_discovered": 3, 00:11:44.234 "num_base_bdevs_operational": 4, 00:11:44.234 "base_bdevs_list": [ 00:11:44.234 { 00:11:44.234 "name": "BaseBdev1", 00:11:44.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.234 "is_configured": false, 00:11:44.234 "data_offset": 0, 00:11:44.234 "data_size": 0 00:11:44.234 }, 00:11:44.234 { 00:11:44.234 "name": "BaseBdev2", 00:11:44.234 "uuid": "4746be4f-17eb-4d78-a316-482d688ae03f", 00:11:44.234 "is_configured": true, 00:11:44.234 "data_offset": 0, 00:11:44.234 "data_size": 65536 00:11:44.234 }, 00:11:44.234 { 00:11:44.234 "name": "BaseBdev3", 00:11:44.234 "uuid": "06c66363-fec3-4428-9b72-346c3cb3e5fb", 00:11:44.234 "is_configured": true, 00:11:44.234 "data_offset": 0, 00:11:44.234 "data_size": 65536 00:11:44.234 }, 00:11:44.234 { 00:11:44.234 "name": "BaseBdev4", 00:11:44.234 "uuid": "551f4ba9-71c2-435e-ae7a-2221f93d7c02", 00:11:44.234 "is_configured": true, 00:11:44.234 "data_offset": 0, 00:11:44.234 "data_size": 65536 00:11:44.234 } 00:11:44.234 ] 00:11:44.234 }' 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.234 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.799 [2024-11-15 10:56:51.515663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.799 "name": "Existed_Raid", 00:11:44.799 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:44.799 "strip_size_kb": 0, 00:11:44.799 "state": "configuring", 00:11:44.799 "raid_level": "raid1", 00:11:44.799 "superblock": false, 00:11:44.799 "num_base_bdevs": 4, 00:11:44.799 "num_base_bdevs_discovered": 2, 00:11:44.799 "num_base_bdevs_operational": 4, 00:11:44.799 "base_bdevs_list": [ 00:11:44.799 { 00:11:44.799 "name": "BaseBdev1", 00:11:44.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.799 "is_configured": false, 00:11:44.799 "data_offset": 0, 00:11:44.799 "data_size": 0 00:11:44.799 }, 00:11:44.799 { 00:11:44.799 "name": null, 00:11:44.799 "uuid": "4746be4f-17eb-4d78-a316-482d688ae03f", 00:11:44.799 "is_configured": false, 00:11:44.799 "data_offset": 0, 00:11:44.799 "data_size": 65536 00:11:44.799 }, 00:11:44.799 { 00:11:44.799 "name": "BaseBdev3", 00:11:44.799 "uuid": "06c66363-fec3-4428-9b72-346c3cb3e5fb", 00:11:44.799 "is_configured": true, 00:11:44.799 "data_offset": 0, 00:11:44.799 "data_size": 65536 00:11:44.799 }, 00:11:44.799 { 00:11:44.799 "name": "BaseBdev4", 00:11:44.799 "uuid": "551f4ba9-71c2-435e-ae7a-2221f93d7c02", 00:11:44.799 "is_configured": true, 00:11:44.799 "data_offset": 0, 00:11:44.799 "data_size": 65536 00:11:44.799 } 00:11:44.799 ] 00:11:44.799 }' 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.799 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.058 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:45.058 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.058 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.058 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.058 10:56:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.316 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:45.316 10:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:45.316 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.316 10:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.316 [2024-11-15 10:56:52.034877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.316 BaseBdev1 00:11:45.316 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.316 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:45.316 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:45.316 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:45.316 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:45.316 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:45.316 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:45.316 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:45.316 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.316 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.316 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.316 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:45.316 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.316 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.316 [ 00:11:45.316 { 00:11:45.316 "name": "BaseBdev1", 00:11:45.316 "aliases": [ 00:11:45.316 "571b81fa-e138-4d30-9965-63c47676ea4e" 00:11:45.316 ], 00:11:45.316 "product_name": "Malloc disk", 00:11:45.316 "block_size": 512, 00:11:45.316 "num_blocks": 65536, 00:11:45.316 "uuid": "571b81fa-e138-4d30-9965-63c47676ea4e", 00:11:45.316 "assigned_rate_limits": { 00:11:45.316 "rw_ios_per_sec": 0, 00:11:45.316 "rw_mbytes_per_sec": 0, 00:11:45.316 "r_mbytes_per_sec": 0, 00:11:45.316 "w_mbytes_per_sec": 0 00:11:45.316 }, 00:11:45.316 "claimed": true, 00:11:45.316 "claim_type": "exclusive_write", 00:11:45.316 "zoned": false, 00:11:45.316 "supported_io_types": { 00:11:45.316 "read": true, 00:11:45.316 "write": true, 00:11:45.316 "unmap": true, 00:11:45.316 "flush": true, 00:11:45.316 "reset": true, 00:11:45.316 "nvme_admin": false, 00:11:45.316 "nvme_io": false, 00:11:45.316 "nvme_io_md": false, 00:11:45.316 "write_zeroes": true, 00:11:45.316 "zcopy": true, 00:11:45.316 "get_zone_info": false, 00:11:45.316 "zone_management": false, 00:11:45.316 "zone_append": false, 00:11:45.316 "compare": false, 00:11:45.316 "compare_and_write": false, 00:11:45.316 "abort": true, 00:11:45.316 "seek_hole": false, 00:11:45.316 "seek_data": false, 00:11:45.316 "copy": true, 00:11:45.316 "nvme_iov_md": false 00:11:45.316 }, 00:11:45.316 "memory_domains": [ 00:11:45.317 { 00:11:45.317 "dma_device_id": "system", 00:11:45.317 "dma_device_type": 1 00:11:45.317 }, 00:11:45.317 { 00:11:45.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.317 "dma_device_type": 2 00:11:45.317 } 00:11:45.317 ], 00:11:45.317 "driver_specific": {} 00:11:45.317 } 00:11:45.317 ] 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.317 "name": "Existed_Raid", 00:11:45.317 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:45.317 "strip_size_kb": 0, 00:11:45.317 "state": "configuring", 00:11:45.317 "raid_level": "raid1", 00:11:45.317 "superblock": false, 00:11:45.317 "num_base_bdevs": 4, 00:11:45.317 "num_base_bdevs_discovered": 3, 00:11:45.317 "num_base_bdevs_operational": 4, 00:11:45.317 "base_bdevs_list": [ 00:11:45.317 { 00:11:45.317 "name": "BaseBdev1", 00:11:45.317 "uuid": "571b81fa-e138-4d30-9965-63c47676ea4e", 00:11:45.317 "is_configured": true, 00:11:45.317 "data_offset": 0, 00:11:45.317 "data_size": 65536 00:11:45.317 }, 00:11:45.317 { 00:11:45.317 "name": null, 00:11:45.317 "uuid": "4746be4f-17eb-4d78-a316-482d688ae03f", 00:11:45.317 "is_configured": false, 00:11:45.317 "data_offset": 0, 00:11:45.317 "data_size": 65536 00:11:45.317 }, 00:11:45.317 { 00:11:45.317 "name": "BaseBdev3", 00:11:45.317 "uuid": "06c66363-fec3-4428-9b72-346c3cb3e5fb", 00:11:45.317 "is_configured": true, 00:11:45.317 "data_offset": 0, 00:11:45.317 "data_size": 65536 00:11:45.317 }, 00:11:45.317 { 00:11:45.317 "name": "BaseBdev4", 00:11:45.317 "uuid": "551f4ba9-71c2-435e-ae7a-2221f93d7c02", 00:11:45.317 "is_configured": true, 00:11:45.317 "data_offset": 0, 00:11:45.317 "data_size": 65536 00:11:45.317 } 00:11:45.317 ] 00:11:45.317 }' 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.317 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.883 [2024-11-15 10:56:52.594060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.883 "name": "Existed_Raid", 00:11:45.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.883 "strip_size_kb": 0, 00:11:45.883 "state": "configuring", 00:11:45.883 "raid_level": "raid1", 00:11:45.883 "superblock": false, 00:11:45.883 "num_base_bdevs": 4, 00:11:45.883 "num_base_bdevs_discovered": 2, 00:11:45.883 "num_base_bdevs_operational": 4, 00:11:45.883 "base_bdevs_list": [ 00:11:45.883 { 00:11:45.883 "name": "BaseBdev1", 00:11:45.883 "uuid": "571b81fa-e138-4d30-9965-63c47676ea4e", 00:11:45.883 "is_configured": true, 00:11:45.883 "data_offset": 0, 00:11:45.883 "data_size": 65536 00:11:45.883 }, 00:11:45.883 { 00:11:45.883 "name": null, 00:11:45.883 "uuid": "4746be4f-17eb-4d78-a316-482d688ae03f", 00:11:45.883 "is_configured": false, 00:11:45.883 "data_offset": 0, 00:11:45.883 "data_size": 65536 00:11:45.883 }, 00:11:45.883 { 00:11:45.883 "name": null, 00:11:45.883 "uuid": "06c66363-fec3-4428-9b72-346c3cb3e5fb", 00:11:45.883 "is_configured": false, 00:11:45.883 "data_offset": 0, 00:11:45.883 "data_size": 65536 00:11:45.883 }, 00:11:45.883 { 00:11:45.883 "name": "BaseBdev4", 00:11:45.883 "uuid": "551f4ba9-71c2-435e-ae7a-2221f93d7c02", 00:11:45.883 "is_configured": true, 00:11:45.883 "data_offset": 0, 00:11:45.883 "data_size": 65536 00:11:45.883 } 00:11:45.883 ] 00:11:45.883 }' 00:11:45.883 10:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.883 10:56:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.141 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:46.141 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.141 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.141 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.400 [2024-11-15 10:56:53.081201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.400 10:56:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.400 "name": "Existed_Raid", 00:11:46.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.400 "strip_size_kb": 0, 00:11:46.400 "state": "configuring", 00:11:46.400 "raid_level": "raid1", 00:11:46.400 "superblock": false, 00:11:46.400 "num_base_bdevs": 4, 00:11:46.400 "num_base_bdevs_discovered": 3, 00:11:46.400 "num_base_bdevs_operational": 4, 00:11:46.400 "base_bdevs_list": [ 00:11:46.400 { 00:11:46.400 "name": "BaseBdev1", 00:11:46.400 "uuid": "571b81fa-e138-4d30-9965-63c47676ea4e", 00:11:46.400 "is_configured": true, 00:11:46.400 "data_offset": 0, 00:11:46.400 "data_size": 65536 00:11:46.400 }, 00:11:46.400 { 00:11:46.400 "name": null, 00:11:46.400 "uuid": "4746be4f-17eb-4d78-a316-482d688ae03f", 00:11:46.400 "is_configured": false, 00:11:46.400 "data_offset": 
0, 00:11:46.400 "data_size": 65536 00:11:46.400 }, 00:11:46.400 { 00:11:46.400 "name": "BaseBdev3", 00:11:46.400 "uuid": "06c66363-fec3-4428-9b72-346c3cb3e5fb", 00:11:46.400 "is_configured": true, 00:11:46.400 "data_offset": 0, 00:11:46.400 "data_size": 65536 00:11:46.400 }, 00:11:46.400 { 00:11:46.400 "name": "BaseBdev4", 00:11:46.400 "uuid": "551f4ba9-71c2-435e-ae7a-2221f93d7c02", 00:11:46.400 "is_configured": true, 00:11:46.400 "data_offset": 0, 00:11:46.400 "data_size": 65536 00:11:46.400 } 00:11:46.400 ] 00:11:46.400 }' 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.400 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.659 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:46.659 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.659 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.659 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.659 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.659 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:46.659 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:46.659 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.659 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.659 [2024-11-15 10:56:53.528502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.918 10:56:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.918 "name": "Existed_Raid", 00:11:46.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.918 "strip_size_kb": 0, 00:11:46.918 "state": "configuring", 00:11:46.918 
"raid_level": "raid1", 00:11:46.918 "superblock": false, 00:11:46.918 "num_base_bdevs": 4, 00:11:46.918 "num_base_bdevs_discovered": 2, 00:11:46.918 "num_base_bdevs_operational": 4, 00:11:46.918 "base_bdevs_list": [ 00:11:46.918 { 00:11:46.918 "name": null, 00:11:46.918 "uuid": "571b81fa-e138-4d30-9965-63c47676ea4e", 00:11:46.918 "is_configured": false, 00:11:46.918 "data_offset": 0, 00:11:46.918 "data_size": 65536 00:11:46.918 }, 00:11:46.918 { 00:11:46.918 "name": null, 00:11:46.918 "uuid": "4746be4f-17eb-4d78-a316-482d688ae03f", 00:11:46.918 "is_configured": false, 00:11:46.918 "data_offset": 0, 00:11:46.918 "data_size": 65536 00:11:46.918 }, 00:11:46.918 { 00:11:46.918 "name": "BaseBdev3", 00:11:46.918 "uuid": "06c66363-fec3-4428-9b72-346c3cb3e5fb", 00:11:46.918 "is_configured": true, 00:11:46.918 "data_offset": 0, 00:11:46.918 "data_size": 65536 00:11:46.918 }, 00:11:46.918 { 00:11:46.918 "name": "BaseBdev4", 00:11:46.918 "uuid": "551f4ba9-71c2-435e-ae7a-2221f93d7c02", 00:11:46.918 "is_configured": true, 00:11:46.918 "data_offset": 0, 00:11:46.918 "data_size": 65536 00:11:46.918 } 00:11:46.918 ] 00:11:46.918 }' 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.918 10:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.177 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.435 [2024-11-15 10:56:54.134093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.435 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.435 "name": "Existed_Raid", 00:11:47.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.435 "strip_size_kb": 0, 00:11:47.435 "state": "configuring", 00:11:47.435 "raid_level": "raid1", 00:11:47.435 "superblock": false, 00:11:47.435 "num_base_bdevs": 4, 00:11:47.435 "num_base_bdevs_discovered": 3, 00:11:47.435 "num_base_bdevs_operational": 4, 00:11:47.435 "base_bdevs_list": [ 00:11:47.435 { 00:11:47.435 "name": null, 00:11:47.435 "uuid": "571b81fa-e138-4d30-9965-63c47676ea4e", 00:11:47.435 "is_configured": false, 00:11:47.435 "data_offset": 0, 00:11:47.435 "data_size": 65536 00:11:47.435 }, 00:11:47.435 { 00:11:47.435 "name": "BaseBdev2", 00:11:47.435 "uuid": "4746be4f-17eb-4d78-a316-482d688ae03f", 00:11:47.436 "is_configured": true, 00:11:47.436 "data_offset": 0, 00:11:47.436 "data_size": 65536 00:11:47.436 }, 00:11:47.436 { 00:11:47.436 "name": "BaseBdev3", 00:11:47.436 "uuid": "06c66363-fec3-4428-9b72-346c3cb3e5fb", 00:11:47.436 "is_configured": true, 00:11:47.436 "data_offset": 0, 00:11:47.436 "data_size": 65536 00:11:47.436 }, 00:11:47.436 { 00:11:47.436 "name": "BaseBdev4", 00:11:47.436 "uuid": "551f4ba9-71c2-435e-ae7a-2221f93d7c02", 00:11:47.436 "is_configured": true, 00:11:47.436 "data_offset": 0, 00:11:47.436 "data_size": 65536 00:11:47.436 } 00:11:47.436 ] 00:11:47.436 }' 00:11:47.436 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.436 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.693 10:56:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:47.693 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.693 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.693 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.693 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.693 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:47.693 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.693 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.693 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:47.693 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 571b81fa-e138-4d30-9965-63c47676ea4e 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.960 [2024-11-15 10:56:54.674519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:47.960 [2024-11-15 10:56:54.674668] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:47.960 [2024-11-15 10:56:54.674697] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:47.960 
[2024-11-15 10:56:54.675039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:47.960 [2024-11-15 10:56:54.675278] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:47.960 [2024-11-15 10:56:54.675295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:47.960 [2024-11-15 10:56:54.675658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.960 NewBaseBdev 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.960 [ 00:11:47.960 { 00:11:47.960 "name": "NewBaseBdev", 00:11:47.960 "aliases": [ 00:11:47.960 "571b81fa-e138-4d30-9965-63c47676ea4e" 00:11:47.960 ], 00:11:47.960 "product_name": "Malloc disk", 00:11:47.960 "block_size": 512, 00:11:47.960 "num_blocks": 65536, 00:11:47.960 "uuid": "571b81fa-e138-4d30-9965-63c47676ea4e", 00:11:47.960 "assigned_rate_limits": { 00:11:47.960 "rw_ios_per_sec": 0, 00:11:47.960 "rw_mbytes_per_sec": 0, 00:11:47.960 "r_mbytes_per_sec": 0, 00:11:47.960 "w_mbytes_per_sec": 0 00:11:47.960 }, 00:11:47.960 "claimed": true, 00:11:47.960 "claim_type": "exclusive_write", 00:11:47.960 "zoned": false, 00:11:47.960 "supported_io_types": { 00:11:47.960 "read": true, 00:11:47.960 "write": true, 00:11:47.960 "unmap": true, 00:11:47.960 "flush": true, 00:11:47.960 "reset": true, 00:11:47.960 "nvme_admin": false, 00:11:47.960 "nvme_io": false, 00:11:47.960 "nvme_io_md": false, 00:11:47.960 "write_zeroes": true, 00:11:47.960 "zcopy": true, 00:11:47.960 "get_zone_info": false, 00:11:47.960 "zone_management": false, 00:11:47.960 "zone_append": false, 00:11:47.960 "compare": false, 00:11:47.960 "compare_and_write": false, 00:11:47.960 "abort": true, 00:11:47.960 "seek_hole": false, 00:11:47.960 "seek_data": false, 00:11:47.960 "copy": true, 00:11:47.960 "nvme_iov_md": false 00:11:47.960 }, 00:11:47.960 "memory_domains": [ 00:11:47.960 { 00:11:47.960 "dma_device_id": "system", 00:11:47.960 "dma_device_type": 1 00:11:47.960 }, 00:11:47.960 { 00:11:47.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.960 "dma_device_type": 2 00:11:47.960 } 00:11:47.960 ], 00:11:47.960 "driver_specific": {} 00:11:47.960 } 00:11:47.960 ] 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 
00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.960 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.961 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.961 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.961 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.961 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.961 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.961 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.961 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.961 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.961 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.961 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.961 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.961 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.961 "name": "Existed_Raid", 00:11:47.961 "uuid": "258f1dc6-f9ee-49d4-b30c-a46cf64d8a3d", 00:11:47.961 "strip_size_kb": 0, 00:11:47.961 "state": "online", 00:11:47.961 
"raid_level": "raid1", 00:11:47.961 "superblock": false, 00:11:47.961 "num_base_bdevs": 4, 00:11:47.961 "num_base_bdevs_discovered": 4, 00:11:47.961 "num_base_bdevs_operational": 4, 00:11:47.961 "base_bdevs_list": [ 00:11:47.961 { 00:11:47.961 "name": "NewBaseBdev", 00:11:47.961 "uuid": "571b81fa-e138-4d30-9965-63c47676ea4e", 00:11:47.961 "is_configured": true, 00:11:47.961 "data_offset": 0, 00:11:47.961 "data_size": 65536 00:11:47.961 }, 00:11:47.961 { 00:11:47.961 "name": "BaseBdev2", 00:11:47.961 "uuid": "4746be4f-17eb-4d78-a316-482d688ae03f", 00:11:47.961 "is_configured": true, 00:11:47.961 "data_offset": 0, 00:11:47.961 "data_size": 65536 00:11:47.961 }, 00:11:47.961 { 00:11:47.961 "name": "BaseBdev3", 00:11:47.961 "uuid": "06c66363-fec3-4428-9b72-346c3cb3e5fb", 00:11:47.961 "is_configured": true, 00:11:47.961 "data_offset": 0, 00:11:47.961 "data_size": 65536 00:11:47.961 }, 00:11:47.961 { 00:11:47.961 "name": "BaseBdev4", 00:11:47.961 "uuid": "551f4ba9-71c2-435e-ae7a-2221f93d7c02", 00:11:47.961 "is_configured": true, 00:11:47.961 "data_offset": 0, 00:11:47.961 "data_size": 65536 00:11:47.961 } 00:11:47.961 ] 00:11:47.961 }' 00:11:47.961 10:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.961 10:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.221 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:48.222 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:48.222 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.222 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.222 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.222 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:48.480 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.480 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:48.480 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.480 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.480 [2024-11-15 10:56:55.158074] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.481 "name": "Existed_Raid", 00:11:48.481 "aliases": [ 00:11:48.481 "258f1dc6-f9ee-49d4-b30c-a46cf64d8a3d" 00:11:48.481 ], 00:11:48.481 "product_name": "Raid Volume", 00:11:48.481 "block_size": 512, 00:11:48.481 "num_blocks": 65536, 00:11:48.481 "uuid": "258f1dc6-f9ee-49d4-b30c-a46cf64d8a3d", 00:11:48.481 "assigned_rate_limits": { 00:11:48.481 "rw_ios_per_sec": 0, 00:11:48.481 "rw_mbytes_per_sec": 0, 00:11:48.481 "r_mbytes_per_sec": 0, 00:11:48.481 "w_mbytes_per_sec": 0 00:11:48.481 }, 00:11:48.481 "claimed": false, 00:11:48.481 "zoned": false, 00:11:48.481 "supported_io_types": { 00:11:48.481 "read": true, 00:11:48.481 "write": true, 00:11:48.481 "unmap": false, 00:11:48.481 "flush": false, 00:11:48.481 "reset": true, 00:11:48.481 "nvme_admin": false, 00:11:48.481 "nvme_io": false, 00:11:48.481 "nvme_io_md": false, 00:11:48.481 "write_zeroes": true, 00:11:48.481 "zcopy": false, 00:11:48.481 "get_zone_info": false, 00:11:48.481 "zone_management": false, 00:11:48.481 "zone_append": false, 00:11:48.481 "compare": false, 00:11:48.481 "compare_and_write": false, 00:11:48.481 "abort": false, 00:11:48.481 "seek_hole": false, 00:11:48.481 "seek_data": false, 00:11:48.481 
"copy": false, 00:11:48.481 "nvme_iov_md": false 00:11:48.481 }, 00:11:48.481 "memory_domains": [ 00:11:48.481 { 00:11:48.481 "dma_device_id": "system", 00:11:48.481 "dma_device_type": 1 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.481 "dma_device_type": 2 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "dma_device_id": "system", 00:11:48.481 "dma_device_type": 1 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.481 "dma_device_type": 2 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "dma_device_id": "system", 00:11:48.481 "dma_device_type": 1 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.481 "dma_device_type": 2 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "dma_device_id": "system", 00:11:48.481 "dma_device_type": 1 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.481 "dma_device_type": 2 00:11:48.481 } 00:11:48.481 ], 00:11:48.481 "driver_specific": { 00:11:48.481 "raid": { 00:11:48.481 "uuid": "258f1dc6-f9ee-49d4-b30c-a46cf64d8a3d", 00:11:48.481 "strip_size_kb": 0, 00:11:48.481 "state": "online", 00:11:48.481 "raid_level": "raid1", 00:11:48.481 "superblock": false, 00:11:48.481 "num_base_bdevs": 4, 00:11:48.481 "num_base_bdevs_discovered": 4, 00:11:48.481 "num_base_bdevs_operational": 4, 00:11:48.481 "base_bdevs_list": [ 00:11:48.481 { 00:11:48.481 "name": "NewBaseBdev", 00:11:48.481 "uuid": "571b81fa-e138-4d30-9965-63c47676ea4e", 00:11:48.481 "is_configured": true, 00:11:48.481 "data_offset": 0, 00:11:48.481 "data_size": 65536 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "name": "BaseBdev2", 00:11:48.481 "uuid": "4746be4f-17eb-4d78-a316-482d688ae03f", 00:11:48.481 "is_configured": true, 00:11:48.481 "data_offset": 0, 00:11:48.481 "data_size": 65536 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "name": "BaseBdev3", 00:11:48.481 "uuid": "06c66363-fec3-4428-9b72-346c3cb3e5fb", 00:11:48.481 
"is_configured": true, 00:11:48.481 "data_offset": 0, 00:11:48.481 "data_size": 65536 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "name": "BaseBdev4", 00:11:48.481 "uuid": "551f4ba9-71c2-435e-ae7a-2221f93d7c02", 00:11:48.481 "is_configured": true, 00:11:48.481 "data_offset": 0, 00:11:48.481 "data_size": 65536 00:11:48.481 } 00:11:48.481 ] 00:11:48.481 } 00:11:48.481 } 00:11:48.481 }' 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:48.481 BaseBdev2 00:11:48.481 BaseBdev3 00:11:48.481 BaseBdev4' 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.481 10:56:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.481 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.739 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.739 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.740 10:56:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.740 [2024-11-15 10:56:55.461236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:48.740 [2024-11-15 10:56:55.461270] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.740 [2024-11-15 10:56:55.461383] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.740 [2024-11-15 10:56:55.461705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.740 [2024-11-15 10:56:55.461725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73340 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 73340 ']' 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 73340 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73340 00:11:48.740 killing process with pid 73340 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73340' 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 73340 00:11:48.740 [2024-11-15 10:56:55.508969] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.740 10:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 73340 00:11:49.316 [2024-11-15 10:56:55.922492] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.250 ************************************ 00:11:50.250 END TEST raid_state_function_test 00:11:50.250 ************************************ 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:50.250 00:11:50.250 real 0m11.534s 00:11:50.250 user 0m18.287s 00:11:50.250 sys 0m2.013s 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
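The `@191`-`@193` loop that just completed compares each base bdev's block/metadata layout by flattening four fields through jq and pattern-matching the result. A self-contained re-creation of that check, using an illustrative JSON stand-in in place of the live `rpc_cmd bdev_get_bdevs` output (field values assumed from this log's `cmp_base_bdev='512   '` result):

```shell
# Re-creation of the base-bdev layout check at bdev_raid.sh@191-193.
# In the test this JSON comes from `rpc_cmd bdev_get_bdevs -b BaseBdev2`;
# here it is an illustrative stand-in consistent with this log's output.
bdev_json='[{"name": "BaseBdev2", "block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null}]'

# Flatten the four layout fields into one comparable string, as @192 does.
cmp_base_bdev=$(echo "$bdev_json" | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')

# jq's join() renders null as an empty string, so the result is "512" plus
# three trailing spaces -- which is why @193 matches [[ 512 == \5\1\2\ \ \ ]].
[[ "$cmp_base_bdev" == "512   " ]] && echo MATCH
```

Running the same filter against each of BaseBdev1..BaseBdev4, as the `for name in $base_bdev_names` loop does, confirms every member agrees on block size and metadata layout before the raid is torn down.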
00:11:50.250 10:56:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:50.250 10:56:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:50.250 10:56:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:50.250 10:56:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.250 ************************************ 00:11:50.250 START TEST raid_state_function_test_sb 00:11:50.250 ************************************ 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.250 
10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74013 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:50.250 Process raid pid: 74013 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74013' 00:11:50.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74013 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 74013 ']' 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:50.250 10:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.510 [2024-11-15 10:56:57.241069] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:11:50.510 [2024-11-15 10:56:57.241348] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.768 [2024-11-15 10:56:57.435087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.768 [2024-11-15 10:56:57.554275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.026 [2024-11-15 10:56:57.768578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.026 [2024-11-15 10:56:57.768672] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.286 [2024-11-15 10:56:58.143498] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.286 [2024-11-15 10:56:58.143643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.286 [2024-11-15 10:56:58.143660] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.286 [2024-11-15 10:56:58.143672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.286 [2024-11-15 10:56:58.143680] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:51.286 [2024-11-15 10:56:58.143690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:51.286 [2024-11-15 10:56:58.143703] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:51.286 [2024-11-15 10:56:58.143713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.286 10:56:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.286 "name": "Existed_Raid", 00:11:51.286 "uuid": "ecddcc47-d377-46bf-875e-0c43f31c5264", 00:11:51.286 "strip_size_kb": 0, 00:11:51.286 "state": "configuring", 00:11:51.286 "raid_level": "raid1", 00:11:51.286 "superblock": true, 00:11:51.286 "num_base_bdevs": 4, 00:11:51.286 "num_base_bdevs_discovered": 0, 00:11:51.286 "num_base_bdevs_operational": 4, 00:11:51.286 "base_bdevs_list": [ 00:11:51.286 { 00:11:51.286 "name": "BaseBdev1", 00:11:51.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.286 "is_configured": false, 00:11:51.286 "data_offset": 0, 00:11:51.286 "data_size": 0 00:11:51.286 }, 00:11:51.286 { 00:11:51.286 "name": "BaseBdev2", 00:11:51.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.286 "is_configured": false, 00:11:51.286 "data_offset": 0, 00:11:51.286 "data_size": 0 00:11:51.286 }, 00:11:51.286 { 00:11:51.286 "name": "BaseBdev3", 00:11:51.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.286 "is_configured": false, 00:11:51.286 "data_offset": 0, 00:11:51.286 "data_size": 0 00:11:51.286 }, 00:11:51.286 { 00:11:51.286 "name": "BaseBdev4", 00:11:51.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.286 "is_configured": false, 00:11:51.286 "data_offset": 0, 00:11:51.286 "data_size": 0 00:11:51.286 } 00:11:51.286 ] 00:11:51.286 }' 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.286 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.853 [2024-11-15 10:56:58.578668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:51.853 [2024-11-15 10:56:58.578763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.853 [2024-11-15 10:56:58.590622] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.853 [2024-11-15 10:56:58.590699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.853 [2024-11-15 10:56:58.590724] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.853 [2024-11-15 10:56:58.590746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.853 [2024-11-15 10:56:58.590764] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:51.853 [2024-11-15 10:56:58.590784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:51.853 [2024-11-15 10:56:58.590801] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:51.853 [2024-11-15 10:56:58.590836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.853 [2024-11-15 10:56:58.640245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.853 BaseBdev1 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
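The `verify_raid_bdev_state` helper invoked above at `bdev_raid.sh@113` parses `rpc_cmd bdev_raid_get_bdevs all` with jq and asserts the array's state and member counts. A minimal sketch of that core check, with the JSON abridged from the `raid_bdev_info` captured in this trace (one of four base bdevs discovered):

```shell
# Sketch of the verify_raid_bdev_state check at bdev_raid.sh@113.
# In the test this JSON comes from `rpc_cmd bdev_raid_get_bdevs all`; here it
# is abridged from the raid_bdev_info recorded in this log.
raid_bdev_info='{"name": "Existed_Raid", "state": "configuring", "raid_level": "raid1", "num_base_bdevs": 4, "num_base_bdevs_discovered": 1}'

state=$(echo "$raid_bdev_info" | jq -r .state)
discovered=$(echo "$raid_bdev_info" | jq -r .num_base_bdevs_discovered)

# Until all four base bdevs exist, the superblock raid1 array must still
# report "configuring" rather than "online".
[[ "$state" == configuring && "$discovered" -lt 4 ]] && echo "configuring: $discovered/4"
```

As the trace shows, each `bdev_malloc_create 32 512 -b BaseBdevN` followed by `bdev_wait_for_examine` bumps `num_base_bdevs_discovered` by one, and the state check is repeated after every member is claimed.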
00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.853 [ 00:11:51.853 { 00:11:51.853 "name": "BaseBdev1", 00:11:51.853 "aliases": [ 00:11:51.853 "5398dd8a-1f44-4e3a-8526-c3f99b4e5afe" 00:11:51.853 ], 00:11:51.853 "product_name": "Malloc disk", 00:11:51.853 "block_size": 512, 00:11:51.853 "num_blocks": 65536, 00:11:51.853 "uuid": "5398dd8a-1f44-4e3a-8526-c3f99b4e5afe", 00:11:51.853 "assigned_rate_limits": { 00:11:51.853 "rw_ios_per_sec": 0, 00:11:51.853 "rw_mbytes_per_sec": 0, 00:11:51.853 "r_mbytes_per_sec": 0, 00:11:51.853 "w_mbytes_per_sec": 0 00:11:51.853 }, 00:11:51.853 "claimed": true, 00:11:51.853 "claim_type": "exclusive_write", 00:11:51.853 "zoned": false, 00:11:51.853 "supported_io_types": { 00:11:51.853 "read": true, 00:11:51.853 "write": true, 00:11:51.853 "unmap": true, 00:11:51.853 "flush": true, 00:11:51.853 "reset": true, 00:11:51.853 "nvme_admin": false, 00:11:51.853 "nvme_io": false, 00:11:51.853 "nvme_io_md": false, 00:11:51.853 "write_zeroes": true, 00:11:51.853 "zcopy": true, 00:11:51.853 "get_zone_info": false, 00:11:51.853 "zone_management": false, 00:11:51.853 "zone_append": false, 00:11:51.853 "compare": false, 00:11:51.853 "compare_and_write": false, 00:11:51.853 "abort": true, 00:11:51.853 "seek_hole": false, 00:11:51.853 "seek_data": false, 00:11:51.853 "copy": true, 00:11:51.853 "nvme_iov_md": false 00:11:51.853 }, 00:11:51.853 "memory_domains": [ 00:11:51.853 { 00:11:51.853 "dma_device_id": "system", 00:11:51.853 "dma_device_type": 1 00:11:51.853 }, 00:11:51.853 { 00:11:51.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.853 "dma_device_type": 2 00:11:51.853 } 00:11:51.853 ], 00:11:51.853 "driver_specific": {} 
00:11:51.853 } 00:11:51.853 ] 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.853 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.853 "name": "Existed_Raid", 00:11:51.853 "uuid": "4d2c6e9a-24c3-4a07-95bf-7f32e2e1472b", 00:11:51.853 "strip_size_kb": 0, 00:11:51.853 "state": "configuring", 00:11:51.853 "raid_level": "raid1", 00:11:51.853 "superblock": true, 00:11:51.853 "num_base_bdevs": 4, 00:11:51.853 "num_base_bdevs_discovered": 1, 00:11:51.853 "num_base_bdevs_operational": 4, 00:11:51.853 "base_bdevs_list": [ 00:11:51.853 { 00:11:51.853 "name": "BaseBdev1", 00:11:51.853 "uuid": "5398dd8a-1f44-4e3a-8526-c3f99b4e5afe", 00:11:51.853 "is_configured": true, 00:11:51.853 "data_offset": 2048, 00:11:51.853 "data_size": 63488 00:11:51.853 }, 00:11:51.853 { 00:11:51.853 "name": "BaseBdev2", 00:11:51.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.854 "is_configured": false, 00:11:51.854 "data_offset": 0, 00:11:51.854 "data_size": 0 00:11:51.854 }, 00:11:51.854 { 00:11:51.854 "name": "BaseBdev3", 00:11:51.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.854 "is_configured": false, 00:11:51.854 "data_offset": 0, 00:11:51.854 "data_size": 0 00:11:51.854 }, 00:11:51.854 { 00:11:51.854 "name": "BaseBdev4", 00:11:51.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.854 "is_configured": false, 00:11:51.854 "data_offset": 0, 00:11:51.854 "data_size": 0 00:11:51.854 } 00:11:51.854 ] 00:11:51.854 }' 00:11:51.854 10:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.854 10:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:52.420 [2024-11-15 10:56:59.099552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.420 [2024-11-15 10:56:59.099672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.420 [2024-11-15 10:56:59.107605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.420 [2024-11-15 10:56:59.109683] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.420 [2024-11-15 10:56:59.109724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.420 [2024-11-15 10:56:59.109735] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:52.420 [2024-11-15 10:56:59.109764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:52.420 [2024-11-15 10:56:59.109772] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:52.420 [2024-11-15 10:56:59.109782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:52.420 10:56:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.420 "name": 
"Existed_Raid", 00:11:52.420 "uuid": "82b9d126-92ec-454e-907b-0ef7afbf0a70", 00:11:52.420 "strip_size_kb": 0, 00:11:52.420 "state": "configuring", 00:11:52.420 "raid_level": "raid1", 00:11:52.420 "superblock": true, 00:11:52.420 "num_base_bdevs": 4, 00:11:52.420 "num_base_bdevs_discovered": 1, 00:11:52.420 "num_base_bdevs_operational": 4, 00:11:52.420 "base_bdevs_list": [ 00:11:52.420 { 00:11:52.420 "name": "BaseBdev1", 00:11:52.420 "uuid": "5398dd8a-1f44-4e3a-8526-c3f99b4e5afe", 00:11:52.420 "is_configured": true, 00:11:52.420 "data_offset": 2048, 00:11:52.420 "data_size": 63488 00:11:52.420 }, 00:11:52.420 { 00:11:52.420 "name": "BaseBdev2", 00:11:52.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.420 "is_configured": false, 00:11:52.420 "data_offset": 0, 00:11:52.420 "data_size": 0 00:11:52.420 }, 00:11:52.420 { 00:11:52.420 "name": "BaseBdev3", 00:11:52.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.420 "is_configured": false, 00:11:52.420 "data_offset": 0, 00:11:52.420 "data_size": 0 00:11:52.420 }, 00:11:52.420 { 00:11:52.420 "name": "BaseBdev4", 00:11:52.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.420 "is_configured": false, 00:11:52.420 "data_offset": 0, 00:11:52.420 "data_size": 0 00:11:52.420 } 00:11:52.420 ] 00:11:52.420 }' 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.420 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.679 [2024-11-15 10:56:59.569676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.679 
BaseBdev2 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.679 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.679 [ 00:11:52.679 { 00:11:52.679 "name": "BaseBdev2", 00:11:52.679 "aliases": [ 00:11:52.679 "73031e85-d3da-486b-a137-f0a7c3549462" 00:11:52.679 ], 00:11:52.679 "product_name": "Malloc disk", 00:11:52.679 "block_size": 512, 00:11:52.679 "num_blocks": 65536, 00:11:52.679 "uuid": "73031e85-d3da-486b-a137-f0a7c3549462", 00:11:52.679 "assigned_rate_limits": { 
00:11:52.679 "rw_ios_per_sec": 0, 00:11:52.679 "rw_mbytes_per_sec": 0, 00:11:52.679 "r_mbytes_per_sec": 0, 00:11:52.679 "w_mbytes_per_sec": 0 00:11:52.679 }, 00:11:52.679 "claimed": true, 00:11:52.679 "claim_type": "exclusive_write", 00:11:52.679 "zoned": false, 00:11:52.679 "supported_io_types": { 00:11:52.679 "read": true, 00:11:52.679 "write": true, 00:11:52.679 "unmap": true, 00:11:52.679 "flush": true, 00:11:52.679 "reset": true, 00:11:52.679 "nvme_admin": false, 00:11:52.679 "nvme_io": false, 00:11:52.679 "nvme_io_md": false, 00:11:52.679 "write_zeroes": true, 00:11:52.679 "zcopy": true, 00:11:52.679 "get_zone_info": false, 00:11:52.679 "zone_management": false, 00:11:52.679 "zone_append": false, 00:11:52.679 "compare": false, 00:11:52.679 "compare_and_write": false, 00:11:52.679 "abort": true, 00:11:52.679 "seek_hole": false, 00:11:52.679 "seek_data": false, 00:11:52.679 "copy": true, 00:11:52.679 "nvme_iov_md": false 00:11:52.679 }, 00:11:52.679 "memory_domains": [ 00:11:52.679 { 00:11:52.938 "dma_device_id": "system", 00:11:52.938 "dma_device_type": 1 00:11:52.938 }, 00:11:52.938 { 00:11:52.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.938 "dma_device_type": 2 00:11:52.938 } 00:11:52.938 ], 00:11:52.938 "driver_specific": {} 00:11:52.938 } 00:11:52.938 ] 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.938 "name": "Existed_Raid", 00:11:52.938 "uuid": "82b9d126-92ec-454e-907b-0ef7afbf0a70", 00:11:52.938 "strip_size_kb": 0, 00:11:52.938 "state": "configuring", 00:11:52.938 "raid_level": "raid1", 00:11:52.938 "superblock": true, 00:11:52.938 "num_base_bdevs": 4, 00:11:52.938 "num_base_bdevs_discovered": 2, 00:11:52.938 "num_base_bdevs_operational": 4, 00:11:52.938 
"base_bdevs_list": [ 00:11:52.938 { 00:11:52.938 "name": "BaseBdev1", 00:11:52.938 "uuid": "5398dd8a-1f44-4e3a-8526-c3f99b4e5afe", 00:11:52.938 "is_configured": true, 00:11:52.938 "data_offset": 2048, 00:11:52.938 "data_size": 63488 00:11:52.938 }, 00:11:52.938 { 00:11:52.938 "name": "BaseBdev2", 00:11:52.938 "uuid": "73031e85-d3da-486b-a137-f0a7c3549462", 00:11:52.938 "is_configured": true, 00:11:52.938 "data_offset": 2048, 00:11:52.938 "data_size": 63488 00:11:52.938 }, 00:11:52.938 { 00:11:52.938 "name": "BaseBdev3", 00:11:52.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.938 "is_configured": false, 00:11:52.938 "data_offset": 0, 00:11:52.938 "data_size": 0 00:11:52.938 }, 00:11:52.938 { 00:11:52.938 "name": "BaseBdev4", 00:11:52.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.938 "is_configured": false, 00:11:52.938 "data_offset": 0, 00:11:52.938 "data_size": 0 00:11:52.938 } 00:11:52.938 ] 00:11:52.938 }' 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.938 10:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.195 [2024-11-15 10:57:00.075253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.195 BaseBdev3 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.195 [ 00:11:53.195 { 00:11:53.195 "name": "BaseBdev3", 00:11:53.195 "aliases": [ 00:11:53.195 "9c985c97-29d3-4af9-9aa8-fc788fc7ad6d" 00:11:53.195 ], 00:11:53.195 "product_name": "Malloc disk", 00:11:53.195 "block_size": 512, 00:11:53.195 "num_blocks": 65536, 00:11:53.195 "uuid": "9c985c97-29d3-4af9-9aa8-fc788fc7ad6d", 00:11:53.195 "assigned_rate_limits": { 00:11:53.195 "rw_ios_per_sec": 0, 00:11:53.195 "rw_mbytes_per_sec": 0, 00:11:53.195 "r_mbytes_per_sec": 0, 00:11:53.195 "w_mbytes_per_sec": 0 00:11:53.195 }, 00:11:53.195 "claimed": true, 00:11:53.195 "claim_type": "exclusive_write", 00:11:53.195 "zoned": false, 00:11:53.195 "supported_io_types": { 00:11:53.195 "read": true, 00:11:53.195 
"write": true, 00:11:53.195 "unmap": true, 00:11:53.195 "flush": true, 00:11:53.195 "reset": true, 00:11:53.195 "nvme_admin": false, 00:11:53.195 "nvme_io": false, 00:11:53.195 "nvme_io_md": false, 00:11:53.195 "write_zeroes": true, 00:11:53.195 "zcopy": true, 00:11:53.195 "get_zone_info": false, 00:11:53.195 "zone_management": false, 00:11:53.195 "zone_append": false, 00:11:53.195 "compare": false, 00:11:53.195 "compare_and_write": false, 00:11:53.195 "abort": true, 00:11:53.195 "seek_hole": false, 00:11:53.195 "seek_data": false, 00:11:53.195 "copy": true, 00:11:53.195 "nvme_iov_md": false 00:11:53.195 }, 00:11:53.195 "memory_domains": [ 00:11:53.195 { 00:11:53.195 "dma_device_id": "system", 00:11:53.195 "dma_device_type": 1 00:11:53.195 }, 00:11:53.195 { 00:11:53.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.195 "dma_device_type": 2 00:11:53.195 } 00:11:53.195 ], 00:11:53.195 "driver_specific": {} 00:11:53.195 } 00:11:53.195 ] 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.195 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.453 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.453 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.453 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.453 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.453 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.453 "name": "Existed_Raid", 00:11:53.453 "uuid": "82b9d126-92ec-454e-907b-0ef7afbf0a70", 00:11:53.453 "strip_size_kb": 0, 00:11:53.453 "state": "configuring", 00:11:53.453 "raid_level": "raid1", 00:11:53.453 "superblock": true, 00:11:53.453 "num_base_bdevs": 4, 00:11:53.453 "num_base_bdevs_discovered": 3, 00:11:53.453 "num_base_bdevs_operational": 4, 00:11:53.453 "base_bdevs_list": [ 00:11:53.453 { 00:11:53.453 "name": "BaseBdev1", 00:11:53.453 "uuid": "5398dd8a-1f44-4e3a-8526-c3f99b4e5afe", 00:11:53.453 "is_configured": true, 00:11:53.453 "data_offset": 2048, 00:11:53.453 "data_size": 63488 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "name": "BaseBdev2", 00:11:53.453 "uuid": 
"73031e85-d3da-486b-a137-f0a7c3549462", 00:11:53.453 "is_configured": true, 00:11:53.453 "data_offset": 2048, 00:11:53.453 "data_size": 63488 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "name": "BaseBdev3", 00:11:53.453 "uuid": "9c985c97-29d3-4af9-9aa8-fc788fc7ad6d", 00:11:53.453 "is_configured": true, 00:11:53.453 "data_offset": 2048, 00:11:53.453 "data_size": 63488 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "name": "BaseBdev4", 00:11:53.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.453 "is_configured": false, 00:11:53.453 "data_offset": 0, 00:11:53.453 "data_size": 0 00:11:53.453 } 00:11:53.453 ] 00:11:53.453 }' 00:11:53.453 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.453 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.710 [2024-11-15 10:57:00.592426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:53.710 [2024-11-15 10:57:00.592826] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:53.710 [2024-11-15 10:57:00.592849] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:53.710 [2024-11-15 10:57:00.593168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:53.710 [2024-11-15 10:57:00.593352] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:53.710 [2024-11-15 10:57:00.593369] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:11:53.710 BaseBdev4 00:11:53.710 [2024-11-15 10:57:00.593534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.710 [ 00:11:53.710 { 00:11:53.710 "name": "BaseBdev4", 00:11:53.710 "aliases": [ 00:11:53.710 "80b4d596-716c-4fbd-ae3e-c8ac839441b5" 00:11:53.710 ], 00:11:53.710 "product_name": "Malloc disk", 00:11:53.710 "block_size": 512, 00:11:53.710 
"num_blocks": 65536, 00:11:53.710 "uuid": "80b4d596-716c-4fbd-ae3e-c8ac839441b5", 00:11:53.710 "assigned_rate_limits": { 00:11:53.710 "rw_ios_per_sec": 0, 00:11:53.710 "rw_mbytes_per_sec": 0, 00:11:53.710 "r_mbytes_per_sec": 0, 00:11:53.710 "w_mbytes_per_sec": 0 00:11:53.710 }, 00:11:53.710 "claimed": true, 00:11:53.710 "claim_type": "exclusive_write", 00:11:53.710 "zoned": false, 00:11:53.710 "supported_io_types": { 00:11:53.710 "read": true, 00:11:53.710 "write": true, 00:11:53.710 "unmap": true, 00:11:53.710 "flush": true, 00:11:53.710 "reset": true, 00:11:53.710 "nvme_admin": false, 00:11:53.710 "nvme_io": false, 00:11:53.710 "nvme_io_md": false, 00:11:53.710 "write_zeroes": true, 00:11:53.710 "zcopy": true, 00:11:53.710 "get_zone_info": false, 00:11:53.710 "zone_management": false, 00:11:53.710 "zone_append": false, 00:11:53.710 "compare": false, 00:11:53.710 "compare_and_write": false, 00:11:53.710 "abort": true, 00:11:53.710 "seek_hole": false, 00:11:53.710 "seek_data": false, 00:11:53.710 "copy": true, 00:11:53.710 "nvme_iov_md": false 00:11:53.710 }, 00:11:53.710 "memory_domains": [ 00:11:53.710 { 00:11:53.710 "dma_device_id": "system", 00:11:53.710 "dma_device_type": 1 00:11:53.710 }, 00:11:53.710 { 00:11:53.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.710 "dma_device_type": 2 00:11:53.710 } 00:11:53.710 ], 00:11:53.710 "driver_specific": {} 00:11:53.710 } 00:11:53.710 ] 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.710 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:53.711 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.711 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.711 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.711 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.711 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.711 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.711 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.711 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.711 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.969 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.969 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.969 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.969 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.969 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.969 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.969 "name": "Existed_Raid", 00:11:53.969 "uuid": "82b9d126-92ec-454e-907b-0ef7afbf0a70", 00:11:53.969 "strip_size_kb": 0, 00:11:53.969 "state": "online", 00:11:53.969 "raid_level": "raid1", 00:11:53.969 "superblock": true, 00:11:53.969 "num_base_bdevs": 4, 
00:11:53.969 "num_base_bdevs_discovered": 4, 00:11:53.969 "num_base_bdevs_operational": 4, 00:11:53.969 "base_bdevs_list": [ 00:11:53.969 { 00:11:53.969 "name": "BaseBdev1", 00:11:53.969 "uuid": "5398dd8a-1f44-4e3a-8526-c3f99b4e5afe", 00:11:53.969 "is_configured": true, 00:11:53.969 "data_offset": 2048, 00:11:53.969 "data_size": 63488 00:11:53.969 }, 00:11:53.969 { 00:11:53.969 "name": "BaseBdev2", 00:11:53.969 "uuid": "73031e85-d3da-486b-a137-f0a7c3549462", 00:11:53.969 "is_configured": true, 00:11:53.969 "data_offset": 2048, 00:11:53.969 "data_size": 63488 00:11:53.969 }, 00:11:53.969 { 00:11:53.969 "name": "BaseBdev3", 00:11:53.969 "uuid": "9c985c97-29d3-4af9-9aa8-fc788fc7ad6d", 00:11:53.969 "is_configured": true, 00:11:53.969 "data_offset": 2048, 00:11:53.969 "data_size": 63488 00:11:53.969 }, 00:11:53.969 { 00:11:53.969 "name": "BaseBdev4", 00:11:53.969 "uuid": "80b4d596-716c-4fbd-ae3e-c8ac839441b5", 00:11:53.969 "is_configured": true, 00:11:53.969 "data_offset": 2048, 00:11:53.969 "data_size": 63488 00:11:53.969 } 00:11:53.969 ] 00:11:53.969 }' 00:11:53.969 10:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.969 10:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.227 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:54.227 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:54.227 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:54.227 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:54.227 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:54.227 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:54.227 
10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:54.227 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:54.227 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.227 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.227 [2024-11-15 10:57:01.072095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.227 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.227 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:54.227 "name": "Existed_Raid", 00:11:54.227 "aliases": [ 00:11:54.227 "82b9d126-92ec-454e-907b-0ef7afbf0a70" 00:11:54.227 ], 00:11:54.227 "product_name": "Raid Volume", 00:11:54.227 "block_size": 512, 00:11:54.227 "num_blocks": 63488, 00:11:54.227 "uuid": "82b9d126-92ec-454e-907b-0ef7afbf0a70", 00:11:54.227 "assigned_rate_limits": { 00:11:54.227 "rw_ios_per_sec": 0, 00:11:54.227 "rw_mbytes_per_sec": 0, 00:11:54.227 "r_mbytes_per_sec": 0, 00:11:54.227 "w_mbytes_per_sec": 0 00:11:54.227 }, 00:11:54.227 "claimed": false, 00:11:54.227 "zoned": false, 00:11:54.227 "supported_io_types": { 00:11:54.227 "read": true, 00:11:54.227 "write": true, 00:11:54.227 "unmap": false, 00:11:54.227 "flush": false, 00:11:54.227 "reset": true, 00:11:54.227 "nvme_admin": false, 00:11:54.227 "nvme_io": false, 00:11:54.227 "nvme_io_md": false, 00:11:54.227 "write_zeroes": true, 00:11:54.227 "zcopy": false, 00:11:54.227 "get_zone_info": false, 00:11:54.227 "zone_management": false, 00:11:54.227 "zone_append": false, 00:11:54.227 "compare": false, 00:11:54.227 "compare_and_write": false, 00:11:54.227 "abort": false, 00:11:54.227 "seek_hole": false, 00:11:54.227 "seek_data": false, 00:11:54.227 "copy": false, 00:11:54.227 
"nvme_iov_md": false 00:11:54.227 }, 00:11:54.227 "memory_domains": [ 00:11:54.227 { 00:11:54.227 "dma_device_id": "system", 00:11:54.227 "dma_device_type": 1 00:11:54.227 }, 00:11:54.227 { 00:11:54.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.227 "dma_device_type": 2 00:11:54.227 }, 00:11:54.227 { 00:11:54.227 "dma_device_id": "system", 00:11:54.227 "dma_device_type": 1 00:11:54.227 }, 00:11:54.227 { 00:11:54.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.227 "dma_device_type": 2 00:11:54.227 }, 00:11:54.227 { 00:11:54.227 "dma_device_id": "system", 00:11:54.227 "dma_device_type": 1 00:11:54.227 }, 00:11:54.227 { 00:11:54.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.227 "dma_device_type": 2 00:11:54.227 }, 00:11:54.227 { 00:11:54.227 "dma_device_id": "system", 00:11:54.227 "dma_device_type": 1 00:11:54.227 }, 00:11:54.227 { 00:11:54.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.227 "dma_device_type": 2 00:11:54.227 } 00:11:54.227 ], 00:11:54.227 "driver_specific": { 00:11:54.227 "raid": { 00:11:54.227 "uuid": "82b9d126-92ec-454e-907b-0ef7afbf0a70", 00:11:54.227 "strip_size_kb": 0, 00:11:54.227 "state": "online", 00:11:54.227 "raid_level": "raid1", 00:11:54.227 "superblock": true, 00:11:54.227 "num_base_bdevs": 4, 00:11:54.227 "num_base_bdevs_discovered": 4, 00:11:54.227 "num_base_bdevs_operational": 4, 00:11:54.227 "base_bdevs_list": [ 00:11:54.227 { 00:11:54.227 "name": "BaseBdev1", 00:11:54.227 "uuid": "5398dd8a-1f44-4e3a-8526-c3f99b4e5afe", 00:11:54.227 "is_configured": true, 00:11:54.227 "data_offset": 2048, 00:11:54.227 "data_size": 63488 00:11:54.227 }, 00:11:54.227 { 00:11:54.227 "name": "BaseBdev2", 00:11:54.227 "uuid": "73031e85-d3da-486b-a137-f0a7c3549462", 00:11:54.227 "is_configured": true, 00:11:54.227 "data_offset": 2048, 00:11:54.227 "data_size": 63488 00:11:54.227 }, 00:11:54.227 { 00:11:54.227 "name": "BaseBdev3", 00:11:54.227 "uuid": "9c985c97-29d3-4af9-9aa8-fc788fc7ad6d", 00:11:54.227 "is_configured": true, 
00:11:54.227 "data_offset": 2048, 00:11:54.227 "data_size": 63488 00:11:54.227 }, 00:11:54.227 { 00:11:54.227 "name": "BaseBdev4", 00:11:54.227 "uuid": "80b4d596-716c-4fbd-ae3e-c8ac839441b5", 00:11:54.227 "is_configured": true, 00:11:54.227 "data_offset": 2048, 00:11:54.227 "data_size": 63488 00:11:54.227 } 00:11:54.227 ] 00:11:54.227 } 00:11:54.227 } 00:11:54.227 }' 00:11:54.227 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:54.485 BaseBdev2 00:11:54.485 BaseBdev3 00:11:54.485 BaseBdev4' 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.485 10:57:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:54.485 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.486 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.486 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.486 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.486 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.486 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.486 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:54.486 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:54.486 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.486 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.486 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.486 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.486 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.744 [2024-11-15 10:57:01.415214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:54.744 10:57:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.744 "name": "Existed_Raid", 00:11:54.744 "uuid": "82b9d126-92ec-454e-907b-0ef7afbf0a70", 00:11:54.744 "strip_size_kb": 0, 00:11:54.744 
"state": "online", 00:11:54.744 "raid_level": "raid1", 00:11:54.744 "superblock": true, 00:11:54.744 "num_base_bdevs": 4, 00:11:54.744 "num_base_bdevs_discovered": 3, 00:11:54.744 "num_base_bdevs_operational": 3, 00:11:54.744 "base_bdevs_list": [ 00:11:54.744 { 00:11:54.744 "name": null, 00:11:54.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.744 "is_configured": false, 00:11:54.744 "data_offset": 0, 00:11:54.744 "data_size": 63488 00:11:54.744 }, 00:11:54.744 { 00:11:54.744 "name": "BaseBdev2", 00:11:54.744 "uuid": "73031e85-d3da-486b-a137-f0a7c3549462", 00:11:54.744 "is_configured": true, 00:11:54.744 "data_offset": 2048, 00:11:54.744 "data_size": 63488 00:11:54.744 }, 00:11:54.744 { 00:11:54.744 "name": "BaseBdev3", 00:11:54.744 "uuid": "9c985c97-29d3-4af9-9aa8-fc788fc7ad6d", 00:11:54.744 "is_configured": true, 00:11:54.744 "data_offset": 2048, 00:11:54.744 "data_size": 63488 00:11:54.744 }, 00:11:54.744 { 00:11:54.744 "name": "BaseBdev4", 00:11:54.744 "uuid": "80b4d596-716c-4fbd-ae3e-c8ac839441b5", 00:11:54.744 "is_configured": true, 00:11:54.744 "data_offset": 2048, 00:11:54.744 "data_size": 63488 00:11:54.744 } 00:11:54.744 ] 00:11:54.744 }' 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.744 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.312 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:55.312 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.312 10:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.312 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.312 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.312 10:57:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:55.312 10:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.312 [2024-11-15 10:57:02.017994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.312 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.312 [2024-11-15 10:57:02.186213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:55.569 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.569 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.569 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.569 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.569 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.569 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:55.569 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.569 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.569 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:55.569 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:55.569 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:55.569 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.569 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.569 [2024-11-15 10:57:02.351097] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:55.569 [2024-11-15 10:57:02.351259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.569 [2024-11-15 10:57:02.447359] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.569 [2024-11-15 10:57:02.447499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.570 [2024-11-15 10:57:02.447544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:55.570 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.570 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.570 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.570 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.570 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.570 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:55.570 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.570 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.827 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:55.827 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:55.827 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:55.827 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:55.827 10:57:02 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.827 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:55.827 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.828 BaseBdev2 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:55.828 [ 00:11:55.828 { 00:11:55.828 "name": "BaseBdev2", 00:11:55.828 "aliases": [ 00:11:55.828 "893de2e8-df37-409c-9ef7-e82976b669ae" 00:11:55.828 ], 00:11:55.828 "product_name": "Malloc disk", 00:11:55.828 "block_size": 512, 00:11:55.828 "num_blocks": 65536, 00:11:55.828 "uuid": "893de2e8-df37-409c-9ef7-e82976b669ae", 00:11:55.828 "assigned_rate_limits": { 00:11:55.828 "rw_ios_per_sec": 0, 00:11:55.828 "rw_mbytes_per_sec": 0, 00:11:55.828 "r_mbytes_per_sec": 0, 00:11:55.828 "w_mbytes_per_sec": 0 00:11:55.828 }, 00:11:55.828 "claimed": false, 00:11:55.828 "zoned": false, 00:11:55.828 "supported_io_types": { 00:11:55.828 "read": true, 00:11:55.828 "write": true, 00:11:55.828 "unmap": true, 00:11:55.828 "flush": true, 00:11:55.828 "reset": true, 00:11:55.828 "nvme_admin": false, 00:11:55.828 "nvme_io": false, 00:11:55.828 "nvme_io_md": false, 00:11:55.828 "write_zeroes": true, 00:11:55.828 "zcopy": true, 00:11:55.828 "get_zone_info": false, 00:11:55.828 "zone_management": false, 00:11:55.828 "zone_append": false, 00:11:55.828 "compare": false, 00:11:55.828 "compare_and_write": false, 00:11:55.828 "abort": true, 00:11:55.828 "seek_hole": false, 00:11:55.828 "seek_data": false, 00:11:55.828 "copy": true, 00:11:55.828 "nvme_iov_md": false 00:11:55.828 }, 00:11:55.828 "memory_domains": [ 00:11:55.828 { 00:11:55.828 "dma_device_id": "system", 00:11:55.828 "dma_device_type": 1 00:11:55.828 }, 00:11:55.828 { 00:11:55.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.828 "dma_device_type": 2 00:11:55.828 } 00:11:55.828 ], 00:11:55.828 "driver_specific": {} 00:11:55.828 } 00:11:55.828 ] 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.828 10:57:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.828 BaseBdev3 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.828 10:57:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.828 [ 00:11:55.828 { 00:11:55.828 "name": "BaseBdev3", 00:11:55.828 "aliases": [ 00:11:55.828 "fa7882ed-de1a-498e-bdaf-48452de9486f" 00:11:55.828 ], 00:11:55.828 "product_name": "Malloc disk", 00:11:55.828 "block_size": 512, 00:11:55.828 "num_blocks": 65536, 00:11:55.828 "uuid": "fa7882ed-de1a-498e-bdaf-48452de9486f", 00:11:55.828 "assigned_rate_limits": { 00:11:55.828 "rw_ios_per_sec": 0, 00:11:55.828 "rw_mbytes_per_sec": 0, 00:11:55.828 "r_mbytes_per_sec": 0, 00:11:55.828 "w_mbytes_per_sec": 0 00:11:55.828 }, 00:11:55.828 "claimed": false, 00:11:55.828 "zoned": false, 00:11:55.828 "supported_io_types": { 00:11:55.828 "read": true, 00:11:55.828 "write": true, 00:11:55.828 "unmap": true, 00:11:55.828 "flush": true, 00:11:55.828 "reset": true, 00:11:55.828 "nvme_admin": false, 00:11:55.828 "nvme_io": false, 00:11:55.828 "nvme_io_md": false, 00:11:55.828 "write_zeroes": true, 00:11:55.828 "zcopy": true, 00:11:55.828 "get_zone_info": false, 00:11:55.828 "zone_management": false, 00:11:55.828 "zone_append": false, 00:11:55.828 "compare": false, 00:11:55.828 "compare_and_write": false, 00:11:55.828 "abort": true, 00:11:55.828 "seek_hole": false, 00:11:55.828 "seek_data": false, 00:11:55.828 "copy": true, 00:11:55.828 "nvme_iov_md": false 00:11:55.828 }, 00:11:55.828 "memory_domains": [ 00:11:55.828 { 00:11:55.828 "dma_device_id": "system", 00:11:55.828 "dma_device_type": 1 00:11:55.828 }, 00:11:55.828 { 00:11:55.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.828 "dma_device_type": 2 00:11:55.828 } 00:11:55.828 ], 00:11:55.828 "driver_specific": {} 00:11:55.828 } 00:11:55.828 ] 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.828 BaseBdev4 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.828 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.828 [ 00:11:55.828 { 00:11:55.828 "name": "BaseBdev4", 00:11:55.828 "aliases": [ 00:11:55.828 "9f88402e-72bf-4f01-ae0b-3237269e6a0f" 00:11:55.828 ], 00:11:55.828 "product_name": "Malloc disk", 00:11:55.828 "block_size": 512, 00:11:55.828 "num_blocks": 65536, 00:11:55.828 "uuid": "9f88402e-72bf-4f01-ae0b-3237269e6a0f", 00:11:55.828 "assigned_rate_limits": { 00:11:55.828 "rw_ios_per_sec": 0, 00:11:55.828 "rw_mbytes_per_sec": 0, 00:11:55.828 "r_mbytes_per_sec": 0, 00:11:55.828 "w_mbytes_per_sec": 0 00:11:55.828 }, 00:11:55.828 "claimed": false, 00:11:55.828 "zoned": false, 00:11:55.828 "supported_io_types": { 00:11:55.828 "read": true, 00:11:55.828 "write": true, 00:11:55.828 "unmap": true, 00:11:55.828 "flush": true, 00:11:55.828 "reset": true, 00:11:55.828 "nvme_admin": false, 00:11:55.828 "nvme_io": false, 00:11:55.828 "nvme_io_md": false, 00:11:55.828 "write_zeroes": true, 00:11:55.828 "zcopy": true, 00:11:55.828 "get_zone_info": false, 00:11:55.828 "zone_management": false, 00:11:55.828 "zone_append": false, 00:11:55.828 "compare": false, 00:11:55.828 "compare_and_write": false, 00:11:55.828 "abort": true, 00:11:55.828 "seek_hole": false, 00:11:55.829 "seek_data": false, 00:11:55.829 "copy": true, 00:11:55.829 "nvme_iov_md": false 00:11:55.829 }, 00:11:55.829 "memory_domains": [ 00:11:55.829 { 00:11:55.829 "dma_device_id": "system", 00:11:55.829 "dma_device_type": 1 00:11:55.829 }, 00:11:55.829 { 00:11:56.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.087 "dma_device_type": 2 00:11:56.087 } 00:11:56.087 ], 00:11:56.087 "driver_specific": {} 00:11:56.087 } 00:11:56.087 ] 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.087 [2024-11-15 10:57:02.761283] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:56.087 [2024-11-15 10:57:02.761385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:56.087 [2024-11-15 10:57:02.761440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.087 [2024-11-15 10:57:02.763193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.087 [2024-11-15 10:57:02.763276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.087 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.087 "name": "Existed_Raid", 00:11:56.087 "uuid": "bba0beac-31df-4059-9f9e-d1a38770e2ed", 00:11:56.087 "strip_size_kb": 0, 00:11:56.087 "state": "configuring", 00:11:56.087 "raid_level": "raid1", 00:11:56.087 "superblock": true, 00:11:56.087 "num_base_bdevs": 4, 00:11:56.087 "num_base_bdevs_discovered": 3, 00:11:56.087 "num_base_bdevs_operational": 4, 00:11:56.087 "base_bdevs_list": [ 00:11:56.087 { 00:11:56.087 "name": "BaseBdev1", 00:11:56.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.087 "is_configured": false, 00:11:56.087 "data_offset": 0, 00:11:56.087 "data_size": 0 00:11:56.087 }, 00:11:56.087 { 00:11:56.087 "name": "BaseBdev2", 00:11:56.087 "uuid": "893de2e8-df37-409c-9ef7-e82976b669ae", 
00:11:56.087 "is_configured": true, 00:11:56.087 "data_offset": 2048, 00:11:56.087 "data_size": 63488 00:11:56.087 }, 00:11:56.087 { 00:11:56.087 "name": "BaseBdev3", 00:11:56.087 "uuid": "fa7882ed-de1a-498e-bdaf-48452de9486f", 00:11:56.087 "is_configured": true, 00:11:56.087 "data_offset": 2048, 00:11:56.088 "data_size": 63488 00:11:56.088 }, 00:11:56.088 { 00:11:56.088 "name": "BaseBdev4", 00:11:56.088 "uuid": "9f88402e-72bf-4f01-ae0b-3237269e6a0f", 00:11:56.088 "is_configured": true, 00:11:56.088 "data_offset": 2048, 00:11:56.088 "data_size": 63488 00:11:56.088 } 00:11:56.088 ] 00:11:56.088 }' 00:11:56.088 10:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.088 10:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.346 [2024-11-15 10:57:03.216533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.346 "name": "Existed_Raid", 00:11:56.346 "uuid": "bba0beac-31df-4059-9f9e-d1a38770e2ed", 00:11:56.346 "strip_size_kb": 0, 00:11:56.346 "state": "configuring", 00:11:56.346 "raid_level": "raid1", 00:11:56.346 "superblock": true, 00:11:56.346 "num_base_bdevs": 4, 00:11:56.346 "num_base_bdevs_discovered": 2, 00:11:56.346 "num_base_bdevs_operational": 4, 00:11:56.346 "base_bdevs_list": [ 00:11:56.346 { 00:11:56.346 "name": "BaseBdev1", 00:11:56.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.346 "is_configured": false, 00:11:56.346 "data_offset": 0, 00:11:56.346 "data_size": 0 00:11:56.346 }, 00:11:56.346 { 00:11:56.346 "name": null, 00:11:56.346 "uuid": "893de2e8-df37-409c-9ef7-e82976b669ae", 00:11:56.346 
"is_configured": false, 00:11:56.346 "data_offset": 0, 00:11:56.346 "data_size": 63488 00:11:56.346 }, 00:11:56.346 { 00:11:56.346 "name": "BaseBdev3", 00:11:56.346 "uuid": "fa7882ed-de1a-498e-bdaf-48452de9486f", 00:11:56.346 "is_configured": true, 00:11:56.346 "data_offset": 2048, 00:11:56.346 "data_size": 63488 00:11:56.346 }, 00:11:56.346 { 00:11:56.346 "name": "BaseBdev4", 00:11:56.346 "uuid": "9f88402e-72bf-4f01-ae0b-3237269e6a0f", 00:11:56.346 "is_configured": true, 00:11:56.346 "data_offset": 2048, 00:11:56.346 "data_size": 63488 00:11:56.346 } 00:11:56.346 ] 00:11:56.346 }' 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.346 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.913 [2024-11-15 10:57:03.738001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.913 BaseBdev1 
00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.913 [ 00:11:56.913 { 00:11:56.913 "name": "BaseBdev1", 00:11:56.913 "aliases": [ 00:11:56.913 "85c7bc3d-1b34-433a-83a5-ad17a15419c9" 00:11:56.913 ], 00:11:56.913 "product_name": "Malloc disk", 00:11:56.913 "block_size": 512, 00:11:56.913 "num_blocks": 65536, 00:11:56.913 "uuid": "85c7bc3d-1b34-433a-83a5-ad17a15419c9", 00:11:56.913 "assigned_rate_limits": { 00:11:56.913 
"rw_ios_per_sec": 0, 00:11:56.913 "rw_mbytes_per_sec": 0, 00:11:56.913 "r_mbytes_per_sec": 0, 00:11:56.913 "w_mbytes_per_sec": 0 00:11:56.913 }, 00:11:56.913 "claimed": true, 00:11:56.913 "claim_type": "exclusive_write", 00:11:56.913 "zoned": false, 00:11:56.913 "supported_io_types": { 00:11:56.913 "read": true, 00:11:56.913 "write": true, 00:11:56.913 "unmap": true, 00:11:56.913 "flush": true, 00:11:56.913 "reset": true, 00:11:56.913 "nvme_admin": false, 00:11:56.913 "nvme_io": false, 00:11:56.913 "nvme_io_md": false, 00:11:56.913 "write_zeroes": true, 00:11:56.913 "zcopy": true, 00:11:56.913 "get_zone_info": false, 00:11:56.913 "zone_management": false, 00:11:56.913 "zone_append": false, 00:11:56.913 "compare": false, 00:11:56.913 "compare_and_write": false, 00:11:56.913 "abort": true, 00:11:56.913 "seek_hole": false, 00:11:56.913 "seek_data": false, 00:11:56.913 "copy": true, 00:11:56.913 "nvme_iov_md": false 00:11:56.913 }, 00:11:56.913 "memory_domains": [ 00:11:56.913 { 00:11:56.913 "dma_device_id": "system", 00:11:56.913 "dma_device_type": 1 00:11:56.913 }, 00:11:56.913 { 00:11:56.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.913 "dma_device_type": 2 00:11:56.913 } 00:11:56.913 ], 00:11:56.913 "driver_specific": {} 00:11:56.913 } 00:11:56.913 ] 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.913 "name": "Existed_Raid", 00:11:56.913 "uuid": "bba0beac-31df-4059-9f9e-d1a38770e2ed", 00:11:56.913 "strip_size_kb": 0, 00:11:56.913 "state": "configuring", 00:11:56.913 "raid_level": "raid1", 00:11:56.913 "superblock": true, 00:11:56.913 "num_base_bdevs": 4, 00:11:56.913 "num_base_bdevs_discovered": 3, 00:11:56.913 "num_base_bdevs_operational": 4, 00:11:56.913 "base_bdevs_list": [ 00:11:56.913 { 00:11:56.913 "name": "BaseBdev1", 00:11:56.913 "uuid": "85c7bc3d-1b34-433a-83a5-ad17a15419c9", 00:11:56.913 "is_configured": true, 00:11:56.913 "data_offset": 2048, 00:11:56.913 "data_size": 63488 
00:11:56.913 }, 00:11:56.913 { 00:11:56.913 "name": null, 00:11:56.913 "uuid": "893de2e8-df37-409c-9ef7-e82976b669ae", 00:11:56.913 "is_configured": false, 00:11:56.913 "data_offset": 0, 00:11:56.913 "data_size": 63488 00:11:56.913 }, 00:11:56.913 { 00:11:56.913 "name": "BaseBdev3", 00:11:56.913 "uuid": "fa7882ed-de1a-498e-bdaf-48452de9486f", 00:11:56.913 "is_configured": true, 00:11:56.913 "data_offset": 2048, 00:11:56.913 "data_size": 63488 00:11:56.913 }, 00:11:56.913 { 00:11:56.913 "name": "BaseBdev4", 00:11:56.913 "uuid": "9f88402e-72bf-4f01-ae0b-3237269e6a0f", 00:11:56.913 "is_configured": true, 00:11:56.913 "data_offset": 2048, 00:11:56.913 "data_size": 63488 00:11:56.913 } 00:11:56.913 ] 00:11:56.913 }' 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.913 10:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.479 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.479 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:57.479 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.479 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.479 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.479 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:57.479 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:57.479 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.479 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.479 
[2024-11-15 10:57:04.273215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:57.479 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.479 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:57.479 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.479 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.479 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.479 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.479 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.480 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.480 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.480 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.480 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.480 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.480 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.480 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.480 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.480 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.480 10:57:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.480 "name": "Existed_Raid", 00:11:57.480 "uuid": "bba0beac-31df-4059-9f9e-d1a38770e2ed", 00:11:57.480 "strip_size_kb": 0, 00:11:57.480 "state": "configuring", 00:11:57.480 "raid_level": "raid1", 00:11:57.480 "superblock": true, 00:11:57.480 "num_base_bdevs": 4, 00:11:57.480 "num_base_bdevs_discovered": 2, 00:11:57.480 "num_base_bdevs_operational": 4, 00:11:57.480 "base_bdevs_list": [ 00:11:57.480 { 00:11:57.480 "name": "BaseBdev1", 00:11:57.480 "uuid": "85c7bc3d-1b34-433a-83a5-ad17a15419c9", 00:11:57.480 "is_configured": true, 00:11:57.480 "data_offset": 2048, 00:11:57.480 "data_size": 63488 00:11:57.480 }, 00:11:57.480 { 00:11:57.480 "name": null, 00:11:57.480 "uuid": "893de2e8-df37-409c-9ef7-e82976b669ae", 00:11:57.480 "is_configured": false, 00:11:57.480 "data_offset": 0, 00:11:57.480 "data_size": 63488 00:11:57.480 }, 00:11:57.480 { 00:11:57.480 "name": null, 00:11:57.480 "uuid": "fa7882ed-de1a-498e-bdaf-48452de9486f", 00:11:57.480 "is_configured": false, 00:11:57.480 "data_offset": 0, 00:11:57.480 "data_size": 63488 00:11:57.480 }, 00:11:57.480 { 00:11:57.480 "name": "BaseBdev4", 00:11:57.480 "uuid": "9f88402e-72bf-4f01-ae0b-3237269e6a0f", 00:11:57.480 "is_configured": true, 00:11:57.480 "data_offset": 2048, 00:11:57.480 "data_size": 63488 00:11:57.480 } 00:11:57.480 ] 00:11:57.480 }' 00:11:57.480 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.480 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.046 
10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.046 [2024-11-15 10:57:04.752404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.046 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.047 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.047 "name": "Existed_Raid", 00:11:58.047 "uuid": "bba0beac-31df-4059-9f9e-d1a38770e2ed", 00:11:58.047 "strip_size_kb": 0, 00:11:58.047 "state": "configuring", 00:11:58.047 "raid_level": "raid1", 00:11:58.047 "superblock": true, 00:11:58.047 "num_base_bdevs": 4, 00:11:58.047 "num_base_bdevs_discovered": 3, 00:11:58.047 "num_base_bdevs_operational": 4, 00:11:58.047 "base_bdevs_list": [ 00:11:58.047 { 00:11:58.047 "name": "BaseBdev1", 00:11:58.047 "uuid": "85c7bc3d-1b34-433a-83a5-ad17a15419c9", 00:11:58.047 "is_configured": true, 00:11:58.047 "data_offset": 2048, 00:11:58.047 "data_size": 63488 00:11:58.047 }, 00:11:58.047 { 00:11:58.047 "name": null, 00:11:58.047 "uuid": "893de2e8-df37-409c-9ef7-e82976b669ae", 00:11:58.047 "is_configured": false, 00:11:58.047 "data_offset": 0, 00:11:58.047 "data_size": 63488 00:11:58.047 }, 00:11:58.047 { 00:11:58.047 "name": "BaseBdev3", 00:11:58.047 "uuid": "fa7882ed-de1a-498e-bdaf-48452de9486f", 00:11:58.047 "is_configured": true, 00:11:58.047 "data_offset": 2048, 00:11:58.047 "data_size": 63488 00:11:58.047 }, 00:11:58.047 { 00:11:58.047 "name": "BaseBdev4", 00:11:58.047 "uuid": 
"9f88402e-72bf-4f01-ae0b-3237269e6a0f", 00:11:58.047 "is_configured": true, 00:11:58.047 "data_offset": 2048, 00:11:58.047 "data_size": 63488 00:11:58.047 } 00:11:58.047 ] 00:11:58.047 }' 00:11:58.047 10:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.047 10:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.305 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.305 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.305 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.305 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:58.305 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.563 [2024-11-15 10:57:05.235678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.563 "name": "Existed_Raid", 00:11:58.563 "uuid": "bba0beac-31df-4059-9f9e-d1a38770e2ed", 00:11:58.563 "strip_size_kb": 0, 00:11:58.563 "state": "configuring", 00:11:58.563 "raid_level": "raid1", 00:11:58.563 "superblock": true, 00:11:58.563 "num_base_bdevs": 4, 00:11:58.563 "num_base_bdevs_discovered": 2, 00:11:58.563 "num_base_bdevs_operational": 4, 00:11:58.563 "base_bdevs_list": [ 00:11:58.563 { 00:11:58.563 "name": null, 00:11:58.563 
"uuid": "85c7bc3d-1b34-433a-83a5-ad17a15419c9", 00:11:58.563 "is_configured": false, 00:11:58.563 "data_offset": 0, 00:11:58.563 "data_size": 63488 00:11:58.563 }, 00:11:58.563 { 00:11:58.563 "name": null, 00:11:58.563 "uuid": "893de2e8-df37-409c-9ef7-e82976b669ae", 00:11:58.563 "is_configured": false, 00:11:58.563 "data_offset": 0, 00:11:58.563 "data_size": 63488 00:11:58.563 }, 00:11:58.563 { 00:11:58.563 "name": "BaseBdev3", 00:11:58.563 "uuid": "fa7882ed-de1a-498e-bdaf-48452de9486f", 00:11:58.563 "is_configured": true, 00:11:58.563 "data_offset": 2048, 00:11:58.563 "data_size": 63488 00:11:58.563 }, 00:11:58.563 { 00:11:58.563 "name": "BaseBdev4", 00:11:58.563 "uuid": "9f88402e-72bf-4f01-ae0b-3237269e6a0f", 00:11:58.563 "is_configured": true, 00:11:58.563 "data_offset": 2048, 00:11:58.563 "data_size": 63488 00:11:58.563 } 00:11:58.563 ] 00:11:58.563 }' 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.563 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.821 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.821 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.821 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:58.821 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.080 [2024-11-15 10:57:05.793203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.080 10:57:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.080 "name": "Existed_Raid", 00:11:59.080 "uuid": "bba0beac-31df-4059-9f9e-d1a38770e2ed", 00:11:59.080 "strip_size_kb": 0, 00:11:59.080 "state": "configuring", 00:11:59.080 "raid_level": "raid1", 00:11:59.080 "superblock": true, 00:11:59.080 "num_base_bdevs": 4, 00:11:59.080 "num_base_bdevs_discovered": 3, 00:11:59.080 "num_base_bdevs_operational": 4, 00:11:59.080 "base_bdevs_list": [ 00:11:59.080 { 00:11:59.080 "name": null, 00:11:59.080 "uuid": "85c7bc3d-1b34-433a-83a5-ad17a15419c9", 00:11:59.080 "is_configured": false, 00:11:59.080 "data_offset": 0, 00:11:59.080 "data_size": 63488 00:11:59.080 }, 00:11:59.080 { 00:11:59.080 "name": "BaseBdev2", 00:11:59.080 "uuid": "893de2e8-df37-409c-9ef7-e82976b669ae", 00:11:59.080 "is_configured": true, 00:11:59.080 "data_offset": 2048, 00:11:59.080 "data_size": 63488 00:11:59.080 }, 00:11:59.080 { 00:11:59.080 "name": "BaseBdev3", 00:11:59.080 "uuid": "fa7882ed-de1a-498e-bdaf-48452de9486f", 00:11:59.080 "is_configured": true, 00:11:59.080 "data_offset": 2048, 00:11:59.080 "data_size": 63488 00:11:59.080 }, 00:11:59.080 { 00:11:59.080 "name": "BaseBdev4", 00:11:59.080 "uuid": "9f88402e-72bf-4f01-ae0b-3237269e6a0f", 00:11:59.080 "is_configured": true, 00:11:59.080 "data_offset": 2048, 00:11:59.080 "data_size": 63488 00:11:59.080 } 00:11:59.080 ] 00:11:59.080 }' 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.080 10:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.340 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:59.340 10:57:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.340 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.340 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 85c7bc3d-1b34-433a-83a5-ad17a15419c9 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.601 [2024-11-15 10:57:06.385661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:59.601 [2024-11-15 10:57:06.385982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:59.601 [2024-11-15 10:57:06.386025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:59.601 [2024-11-15 10:57:06.386357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:59.601 [2024-11-15 10:57:06.386567] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:59.601 [2024-11-15 10:57:06.386614] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:59.601 NewBaseBdev 00:11:59.601 [2024-11-15 10:57:06.386827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.601 10:57:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.601 [ 00:11:59.601 { 00:11:59.601 "name": "NewBaseBdev", 00:11:59.601 "aliases": [ 00:11:59.601 "85c7bc3d-1b34-433a-83a5-ad17a15419c9" 00:11:59.601 ], 00:11:59.601 "product_name": "Malloc disk", 00:11:59.601 "block_size": 512, 00:11:59.601 "num_blocks": 65536, 00:11:59.601 "uuid": "85c7bc3d-1b34-433a-83a5-ad17a15419c9", 00:11:59.601 "assigned_rate_limits": { 00:11:59.601 "rw_ios_per_sec": 0, 00:11:59.601 "rw_mbytes_per_sec": 0, 00:11:59.601 "r_mbytes_per_sec": 0, 00:11:59.601 "w_mbytes_per_sec": 0 00:11:59.601 }, 00:11:59.601 "claimed": true, 00:11:59.601 "claim_type": "exclusive_write", 00:11:59.601 "zoned": false, 00:11:59.601 "supported_io_types": { 00:11:59.601 "read": true, 00:11:59.601 "write": true, 00:11:59.601 "unmap": true, 00:11:59.601 "flush": true, 00:11:59.601 "reset": true, 00:11:59.601 "nvme_admin": false, 00:11:59.601 "nvme_io": false, 00:11:59.601 "nvme_io_md": false, 00:11:59.601 "write_zeroes": true, 00:11:59.601 "zcopy": true, 00:11:59.601 "get_zone_info": false, 00:11:59.601 "zone_management": false, 00:11:59.601 "zone_append": false, 00:11:59.601 "compare": false, 00:11:59.601 "compare_and_write": false, 00:11:59.601 "abort": true, 00:11:59.601 "seek_hole": false, 00:11:59.601 "seek_data": false, 00:11:59.601 "copy": true, 00:11:59.601 "nvme_iov_md": false 00:11:59.601 }, 00:11:59.601 "memory_domains": [ 00:11:59.601 { 00:11:59.601 "dma_device_id": "system", 00:11:59.601 "dma_device_type": 1 00:11:59.601 }, 00:11:59.601 { 00:11:59.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.601 "dma_device_type": 2 00:11:59.601 } 00:11:59.601 ], 00:11:59.601 "driver_specific": {} 00:11:59.601 } 00:11:59.601 ] 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:59.601 10:57:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.601 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.602 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.602 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.602 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.602 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.602 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.602 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.602 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.602 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.602 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.602 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.602 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.602 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.602 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.602 "name": "Existed_Raid", 00:11:59.602 "uuid": "bba0beac-31df-4059-9f9e-d1a38770e2ed", 00:11:59.602 "strip_size_kb": 0, 00:11:59.602 
"state": "online", 00:11:59.602 "raid_level": "raid1", 00:11:59.602 "superblock": true, 00:11:59.602 "num_base_bdevs": 4, 00:11:59.602 "num_base_bdevs_discovered": 4, 00:11:59.602 "num_base_bdevs_operational": 4, 00:11:59.602 "base_bdevs_list": [ 00:11:59.602 { 00:11:59.602 "name": "NewBaseBdev", 00:11:59.602 "uuid": "85c7bc3d-1b34-433a-83a5-ad17a15419c9", 00:11:59.602 "is_configured": true, 00:11:59.602 "data_offset": 2048, 00:11:59.602 "data_size": 63488 00:11:59.602 }, 00:11:59.602 { 00:11:59.602 "name": "BaseBdev2", 00:11:59.602 "uuid": "893de2e8-df37-409c-9ef7-e82976b669ae", 00:11:59.602 "is_configured": true, 00:11:59.602 "data_offset": 2048, 00:11:59.602 "data_size": 63488 00:11:59.602 }, 00:11:59.602 { 00:11:59.602 "name": "BaseBdev3", 00:11:59.602 "uuid": "fa7882ed-de1a-498e-bdaf-48452de9486f", 00:11:59.602 "is_configured": true, 00:11:59.602 "data_offset": 2048, 00:11:59.602 "data_size": 63488 00:11:59.602 }, 00:11:59.602 { 00:11:59.602 "name": "BaseBdev4", 00:11:59.602 "uuid": "9f88402e-72bf-4f01-ae0b-3237269e6a0f", 00:11:59.602 "is_configured": true, 00:11:59.602 "data_offset": 2048, 00:11:59.602 "data_size": 63488 00:11:59.602 } 00:11:59.602 ] 00:11:59.602 }' 00:11:59.602 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.602 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.170 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:00.170 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:00.170 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:00.170 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:00.170 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:00.170 
10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:00.170 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:00.170 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:00.170 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.170 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.170 [2024-11-15 10:57:06.889901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.170 10:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.170 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:00.170 "name": "Existed_Raid", 00:12:00.170 "aliases": [ 00:12:00.170 "bba0beac-31df-4059-9f9e-d1a38770e2ed" 00:12:00.170 ], 00:12:00.170 "product_name": "Raid Volume", 00:12:00.170 "block_size": 512, 00:12:00.170 "num_blocks": 63488, 00:12:00.170 "uuid": "bba0beac-31df-4059-9f9e-d1a38770e2ed", 00:12:00.170 "assigned_rate_limits": { 00:12:00.170 "rw_ios_per_sec": 0, 00:12:00.171 "rw_mbytes_per_sec": 0, 00:12:00.171 "r_mbytes_per_sec": 0, 00:12:00.171 "w_mbytes_per_sec": 0 00:12:00.171 }, 00:12:00.171 "claimed": false, 00:12:00.171 "zoned": false, 00:12:00.171 "supported_io_types": { 00:12:00.171 "read": true, 00:12:00.171 "write": true, 00:12:00.171 "unmap": false, 00:12:00.171 "flush": false, 00:12:00.171 "reset": true, 00:12:00.171 "nvme_admin": false, 00:12:00.171 "nvme_io": false, 00:12:00.171 "nvme_io_md": false, 00:12:00.171 "write_zeroes": true, 00:12:00.171 "zcopy": false, 00:12:00.171 "get_zone_info": false, 00:12:00.171 "zone_management": false, 00:12:00.171 "zone_append": false, 00:12:00.171 "compare": false, 00:12:00.171 "compare_and_write": false, 00:12:00.171 
"abort": false, 00:12:00.171 "seek_hole": false, 00:12:00.171 "seek_data": false, 00:12:00.171 "copy": false, 00:12:00.171 "nvme_iov_md": false 00:12:00.171 }, 00:12:00.171 "memory_domains": [ 00:12:00.171 { 00:12:00.171 "dma_device_id": "system", 00:12:00.171 "dma_device_type": 1 00:12:00.171 }, 00:12:00.171 { 00:12:00.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.171 "dma_device_type": 2 00:12:00.171 }, 00:12:00.171 { 00:12:00.171 "dma_device_id": "system", 00:12:00.171 "dma_device_type": 1 00:12:00.171 }, 00:12:00.171 { 00:12:00.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.171 "dma_device_type": 2 00:12:00.171 }, 00:12:00.171 { 00:12:00.171 "dma_device_id": "system", 00:12:00.171 "dma_device_type": 1 00:12:00.171 }, 00:12:00.171 { 00:12:00.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.171 "dma_device_type": 2 00:12:00.171 }, 00:12:00.171 { 00:12:00.171 "dma_device_id": "system", 00:12:00.171 "dma_device_type": 1 00:12:00.171 }, 00:12:00.171 { 00:12:00.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.171 "dma_device_type": 2 00:12:00.171 } 00:12:00.171 ], 00:12:00.171 "driver_specific": { 00:12:00.171 "raid": { 00:12:00.171 "uuid": "bba0beac-31df-4059-9f9e-d1a38770e2ed", 00:12:00.171 "strip_size_kb": 0, 00:12:00.171 "state": "online", 00:12:00.171 "raid_level": "raid1", 00:12:00.171 "superblock": true, 00:12:00.171 "num_base_bdevs": 4, 00:12:00.171 "num_base_bdevs_discovered": 4, 00:12:00.171 "num_base_bdevs_operational": 4, 00:12:00.171 "base_bdevs_list": [ 00:12:00.171 { 00:12:00.171 "name": "NewBaseBdev", 00:12:00.171 "uuid": "85c7bc3d-1b34-433a-83a5-ad17a15419c9", 00:12:00.171 "is_configured": true, 00:12:00.171 "data_offset": 2048, 00:12:00.171 "data_size": 63488 00:12:00.171 }, 00:12:00.171 { 00:12:00.171 "name": "BaseBdev2", 00:12:00.171 "uuid": "893de2e8-df37-409c-9ef7-e82976b669ae", 00:12:00.171 "is_configured": true, 00:12:00.171 "data_offset": 2048, 00:12:00.171 "data_size": 63488 00:12:00.171 }, 00:12:00.171 { 
00:12:00.171 "name": "BaseBdev3", 00:12:00.171 "uuid": "fa7882ed-de1a-498e-bdaf-48452de9486f", 00:12:00.171 "is_configured": true, 00:12:00.171 "data_offset": 2048, 00:12:00.171 "data_size": 63488 00:12:00.171 }, 00:12:00.171 { 00:12:00.171 "name": "BaseBdev4", 00:12:00.171 "uuid": "9f88402e-72bf-4f01-ae0b-3237269e6a0f", 00:12:00.171 "is_configured": true, 00:12:00.171 "data_offset": 2048, 00:12:00.171 "data_size": 63488 00:12:00.171 } 00:12:00.171 ] 00:12:00.171 } 00:12:00.171 } 00:12:00.171 }' 00:12:00.171 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:00.171 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:00.171 BaseBdev2 00:12:00.171 BaseBdev3 00:12:00.171 BaseBdev4' 00:12:00.171 10:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.171 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:00.171 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.171 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.171 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:00.171 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.171 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.171 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.171 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:00.171 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.171 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.171 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:00.171 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.171 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.171 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.171 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.430 [2024-11-15 10:57:07.224376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:00.430 [2024-11-15 10:57:07.224423] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.430 [2024-11-15 10:57:07.224528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.430 [2024-11-15 10:57:07.224890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.430 [2024-11-15 10:57:07.224907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74013 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 74013 ']' 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 74013 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74013 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74013' 00:12:00.430 killing process with pid 74013 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 74013 00:12:00.430 [2024-11-15 10:57:07.272397] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.430 10:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 74013 00:12:00.999 [2024-11-15 10:57:07.716771] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.383 10:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:02.383 00:12:02.383 real 0m11.830s 00:12:02.383 user 0m18.667s 00:12:02.383 sys 0m2.053s 00:12:02.383 10:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:12:02.383 10:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.383 ************************************ 00:12:02.383 END TEST raid_state_function_test_sb 00:12:02.383 ************************************ 00:12:02.383 10:57:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:02.383 10:57:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:02.383 10:57:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:02.383 10:57:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.383 ************************************ 00:12:02.383 START TEST raid_superblock_test 00:12:02.383 ************************************ 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:02.383 10:57:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74683 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74683 00:12:02.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 74683 ']' 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:02.383 10:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.383 [2024-11-15 10:57:09.133196] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:12:02.383 [2024-11-15 10:57:09.133451] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74683 ] 00:12:02.643 [2024-11-15 10:57:09.314360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.643 [2024-11-15 10:57:09.463513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.903 [2024-11-15 10:57:09.691851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.903 [2024-11-15 10:57:09.692027] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.163 10:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:03.163 10:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:03.163 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:03.163 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.163 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:03.163 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:03.163 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:03.163 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.163 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.163 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.163 10:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:03.163 
10:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.163 10:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.163 malloc1 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.163 [2024-11-15 10:57:10.018756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:03.163 [2024-11-15 10:57:10.018834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.163 [2024-11-15 10:57:10.018862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:03.163 [2024-11-15 10:57:10.018874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.163 [2024-11-15 10:57:10.021234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.163 [2024-11-15 10:57:10.021278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:03.163 pt1 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.163 malloc2 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.163 [2024-11-15 10:57:10.074057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:03.163 [2024-11-15 10:57:10.074186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.163 [2024-11-15 10:57:10.074231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:03.163 [2024-11-15 10:57:10.074268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.163 [2024-11-15 10:57:10.076671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.163 [2024-11-15 10:57:10.076760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:03.163 
pt2 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.163 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.423 malloc3 00:12:03.423 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.423 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:03.423 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.423 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.423 [2024-11-15 10:57:10.141136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:03.423 [2024-11-15 10:57:10.141258] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.423 [2024-11-15 10:57:10.141320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:03.423 [2024-11-15 10:57:10.141405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.423 [2024-11-15 10:57:10.143681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.423 [2024-11-15 10:57:10.143764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:03.424 pt3 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.424 malloc4 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.424 [2024-11-15 10:57:10.202840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:03.424 [2024-11-15 10:57:10.202903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.424 [2024-11-15 10:57:10.202943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:03.424 [2024-11-15 10:57:10.202955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.424 [2024-11-15 10:57:10.205513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.424 [2024-11-15 10:57:10.205567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:03.424 pt4 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.424 [2024-11-15 10:57:10.214873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:03.424 [2024-11-15 10:57:10.217019] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:03.424 [2024-11-15 10:57:10.217098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:03.424 [2024-11-15 10:57:10.217151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:03.424 [2024-11-15 10:57:10.217412] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:03.424 [2024-11-15 10:57:10.217435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:03.424 [2024-11-15 10:57:10.217756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:03.424 [2024-11-15 10:57:10.217968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:03.424 [2024-11-15 10:57:10.217988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:03.424 [2024-11-15 10:57:10.218163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.424 
10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.424 "name": "raid_bdev1", 00:12:03.424 "uuid": "a5f029ff-4dee-4400-8b6f-40b4fb9333da", 00:12:03.424 "strip_size_kb": 0, 00:12:03.424 "state": "online", 00:12:03.424 "raid_level": "raid1", 00:12:03.424 "superblock": true, 00:12:03.424 "num_base_bdevs": 4, 00:12:03.424 "num_base_bdevs_discovered": 4, 00:12:03.424 "num_base_bdevs_operational": 4, 00:12:03.424 "base_bdevs_list": [ 00:12:03.424 { 00:12:03.424 "name": "pt1", 00:12:03.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:03.424 "is_configured": true, 00:12:03.424 "data_offset": 2048, 00:12:03.424 "data_size": 63488 00:12:03.424 }, 00:12:03.424 { 00:12:03.424 "name": "pt2", 00:12:03.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.424 "is_configured": true, 00:12:03.424 "data_offset": 2048, 00:12:03.424 "data_size": 63488 00:12:03.424 }, 00:12:03.424 { 00:12:03.424 "name": "pt3", 00:12:03.424 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:03.424 "is_configured": true, 00:12:03.424 "data_offset": 2048, 00:12:03.424 "data_size": 63488 
00:12:03.424 }, 00:12:03.424 { 00:12:03.424 "name": "pt4", 00:12:03.424 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:03.424 "is_configured": true, 00:12:03.424 "data_offset": 2048, 00:12:03.424 "data_size": 63488 00:12:03.424 } 00:12:03.424 ] 00:12:03.424 }' 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.424 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.994 [2024-11-15 10:57:10.690441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.994 "name": "raid_bdev1", 00:12:03.994 "aliases": [ 00:12:03.994 "a5f029ff-4dee-4400-8b6f-40b4fb9333da" 00:12:03.994 ], 
00:12:03.994 "product_name": "Raid Volume", 00:12:03.994 "block_size": 512, 00:12:03.994 "num_blocks": 63488, 00:12:03.994 "uuid": "a5f029ff-4dee-4400-8b6f-40b4fb9333da", 00:12:03.994 "assigned_rate_limits": { 00:12:03.994 "rw_ios_per_sec": 0, 00:12:03.994 "rw_mbytes_per_sec": 0, 00:12:03.994 "r_mbytes_per_sec": 0, 00:12:03.994 "w_mbytes_per_sec": 0 00:12:03.994 }, 00:12:03.994 "claimed": false, 00:12:03.994 "zoned": false, 00:12:03.994 "supported_io_types": { 00:12:03.994 "read": true, 00:12:03.994 "write": true, 00:12:03.994 "unmap": false, 00:12:03.994 "flush": false, 00:12:03.994 "reset": true, 00:12:03.994 "nvme_admin": false, 00:12:03.994 "nvme_io": false, 00:12:03.994 "nvme_io_md": false, 00:12:03.994 "write_zeroes": true, 00:12:03.994 "zcopy": false, 00:12:03.994 "get_zone_info": false, 00:12:03.994 "zone_management": false, 00:12:03.994 "zone_append": false, 00:12:03.994 "compare": false, 00:12:03.994 "compare_and_write": false, 00:12:03.994 "abort": false, 00:12:03.994 "seek_hole": false, 00:12:03.994 "seek_data": false, 00:12:03.994 "copy": false, 00:12:03.994 "nvme_iov_md": false 00:12:03.994 }, 00:12:03.994 "memory_domains": [ 00:12:03.994 { 00:12:03.994 "dma_device_id": "system", 00:12:03.994 "dma_device_type": 1 00:12:03.994 }, 00:12:03.994 { 00:12:03.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.994 "dma_device_type": 2 00:12:03.994 }, 00:12:03.994 { 00:12:03.994 "dma_device_id": "system", 00:12:03.994 "dma_device_type": 1 00:12:03.994 }, 00:12:03.994 { 00:12:03.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.994 "dma_device_type": 2 00:12:03.994 }, 00:12:03.994 { 00:12:03.994 "dma_device_id": "system", 00:12:03.994 "dma_device_type": 1 00:12:03.994 }, 00:12:03.994 { 00:12:03.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.994 "dma_device_type": 2 00:12:03.994 }, 00:12:03.994 { 00:12:03.994 "dma_device_id": "system", 00:12:03.994 "dma_device_type": 1 00:12:03.994 }, 00:12:03.994 { 00:12:03.994 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:03.994 "dma_device_type": 2 00:12:03.994 } 00:12:03.994 ], 00:12:03.994 "driver_specific": { 00:12:03.994 "raid": { 00:12:03.994 "uuid": "a5f029ff-4dee-4400-8b6f-40b4fb9333da", 00:12:03.994 "strip_size_kb": 0, 00:12:03.994 "state": "online", 00:12:03.994 "raid_level": "raid1", 00:12:03.994 "superblock": true, 00:12:03.994 "num_base_bdevs": 4, 00:12:03.994 "num_base_bdevs_discovered": 4, 00:12:03.994 "num_base_bdevs_operational": 4, 00:12:03.994 "base_bdevs_list": [ 00:12:03.994 { 00:12:03.994 "name": "pt1", 00:12:03.994 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:03.994 "is_configured": true, 00:12:03.994 "data_offset": 2048, 00:12:03.994 "data_size": 63488 00:12:03.994 }, 00:12:03.994 { 00:12:03.994 "name": "pt2", 00:12:03.994 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.994 "is_configured": true, 00:12:03.994 "data_offset": 2048, 00:12:03.994 "data_size": 63488 00:12:03.994 }, 00:12:03.994 { 00:12:03.994 "name": "pt3", 00:12:03.994 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:03.994 "is_configured": true, 00:12:03.994 "data_offset": 2048, 00:12:03.994 "data_size": 63488 00:12:03.994 }, 00:12:03.994 { 00:12:03.994 "name": "pt4", 00:12:03.994 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:03.994 "is_configured": true, 00:12:03.994 "data_offset": 2048, 00:12:03.994 "data_size": 63488 00:12:03.994 } 00:12:03.994 ] 00:12:03.994 } 00:12:03.994 } 00:12:03.994 }' 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:03.994 pt2 00:12:03.994 pt3 00:12:03.994 pt4' 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.994 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.995 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.995 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.995 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.995 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.995 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:03.995 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.995 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.995 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.995 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.255 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.255 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.255 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.255 10:57:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:04.255 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.255 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.255 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.255 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.255 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.255 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.255 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.255 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:04.255 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.255 10:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.255 10:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.255 [2024-11-15 10:57:11.045796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a5f029ff-4dee-4400-8b6f-40b4fb9333da 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a5f029ff-4dee-4400-8b6f-40b4fb9333da ']' 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.255 [2024-11-15 10:57:11.093449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.255 [2024-11-15 10:57:11.093541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:04.255 [2024-11-15 10:57:11.093678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:04.255 [2024-11-15 10:57:11.093812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:04.255 [2024-11-15 10:57:11.093881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.255 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.515 10:57:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.515 [2024-11-15 10:57:11.241199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:04.515 [2024-11-15 10:57:11.243130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:04.515 [2024-11-15 10:57:11.243181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:04.515 [2024-11-15 10:57:11.243227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:04.515 [2024-11-15 10:57:11.243287] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:04.515 [2024-11-15 10:57:11.243381] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:04.515 [2024-11-15 10:57:11.243406] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:04.515 [2024-11-15 10:57:11.243429] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:04.515 [2024-11-15 10:57:11.243445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.515 [2024-11-15 10:57:11.243458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:04.515 request: 00:12:04.515 { 00:12:04.515 "name": "raid_bdev1", 00:12:04.515 "raid_level": "raid1", 00:12:04.515 "base_bdevs": [ 00:12:04.515 "malloc1", 00:12:04.515 "malloc2", 00:12:04.515 "malloc3", 00:12:04.515 "malloc4" 00:12:04.515 ], 00:12:04.515 "superblock": false, 00:12:04.515 "method": "bdev_raid_create", 00:12:04.515 "req_id": 1 00:12:04.515 } 00:12:04.515 Got JSON-RPC error response 00:12:04.515 response: 00:12:04.515 { 00:12:04.515 "code": -17, 00:12:04.515 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:04.515 } 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:04.515 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:04.516 
10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.516 [2024-11-15 10:57:11.305063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:04.516 [2024-11-15 10:57:11.305243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.516 [2024-11-15 10:57:11.305290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:04.516 [2024-11-15 10:57:11.305351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.516 [2024-11-15 10:57:11.307853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.516 [2024-11-15 10:57:11.307970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:04.516 [2024-11-15 10:57:11.308115] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:04.516 [2024-11-15 10:57:11.308228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:04.516 pt1 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.516 10:57:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.516 "name": "raid_bdev1", 00:12:04.516 "uuid": "a5f029ff-4dee-4400-8b6f-40b4fb9333da", 00:12:04.516 "strip_size_kb": 0, 00:12:04.516 "state": "configuring", 00:12:04.516 "raid_level": "raid1", 00:12:04.516 "superblock": true, 00:12:04.516 "num_base_bdevs": 4, 00:12:04.516 "num_base_bdevs_discovered": 1, 00:12:04.516 "num_base_bdevs_operational": 4, 00:12:04.516 "base_bdevs_list": [ 00:12:04.516 { 00:12:04.516 "name": "pt1", 00:12:04.516 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:04.516 "is_configured": true, 00:12:04.516 "data_offset": 2048, 00:12:04.516 "data_size": 63488 00:12:04.516 }, 00:12:04.516 { 00:12:04.516 "name": null, 00:12:04.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.516 "is_configured": false, 00:12:04.516 "data_offset": 2048, 00:12:04.516 "data_size": 63488 00:12:04.516 }, 00:12:04.516 { 00:12:04.516 "name": null, 00:12:04.516 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.516 
"is_configured": false, 00:12:04.516 "data_offset": 2048, 00:12:04.516 "data_size": 63488 00:12:04.516 }, 00:12:04.516 { 00:12:04.516 "name": null, 00:12:04.516 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:04.516 "is_configured": false, 00:12:04.516 "data_offset": 2048, 00:12:04.516 "data_size": 63488 00:12:04.516 } 00:12:04.516 ] 00:12:04.516 }' 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.516 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.084 [2024-11-15 10:57:11.804231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:05.084 [2024-11-15 10:57:11.804324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.084 [2024-11-15 10:57:11.804349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:05.084 [2024-11-15 10:57:11.804363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.084 [2024-11-15 10:57:11.804855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.084 [2024-11-15 10:57:11.804895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:05.084 [2024-11-15 10:57:11.804990] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:05.084 [2024-11-15 10:57:11.805029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:05.084 pt2 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.084 [2024-11-15 10:57:11.816187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.084 10:57:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.084 "name": "raid_bdev1", 00:12:05.084 "uuid": "a5f029ff-4dee-4400-8b6f-40b4fb9333da", 00:12:05.084 "strip_size_kb": 0, 00:12:05.084 "state": "configuring", 00:12:05.084 "raid_level": "raid1", 00:12:05.084 "superblock": true, 00:12:05.084 "num_base_bdevs": 4, 00:12:05.084 "num_base_bdevs_discovered": 1, 00:12:05.084 "num_base_bdevs_operational": 4, 00:12:05.084 "base_bdevs_list": [ 00:12:05.084 { 00:12:05.084 "name": "pt1", 00:12:05.084 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.084 "is_configured": true, 00:12:05.084 "data_offset": 2048, 00:12:05.084 "data_size": 63488 00:12:05.084 }, 00:12:05.084 { 00:12:05.084 "name": null, 00:12:05.084 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.084 "is_configured": false, 00:12:05.084 "data_offset": 0, 00:12:05.084 "data_size": 63488 00:12:05.084 }, 00:12:05.084 { 00:12:05.084 "name": null, 00:12:05.084 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.084 "is_configured": false, 00:12:05.084 "data_offset": 2048, 00:12:05.084 "data_size": 63488 00:12:05.084 }, 00:12:05.084 { 00:12:05.084 "name": null, 00:12:05.084 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:05.084 "is_configured": false, 00:12:05.084 "data_offset": 2048, 00:12:05.084 "data_size": 63488 00:12:05.084 } 00:12:05.084 ] 00:12:05.084 }' 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.084 10:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.653 [2024-11-15 10:57:12.307432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:05.653 [2024-11-15 10:57:12.307512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.653 [2024-11-15 10:57:12.307544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:05.653 [2024-11-15 10:57:12.307558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.653 [2024-11-15 10:57:12.308069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.653 [2024-11-15 10:57:12.308091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:05.653 [2024-11-15 10:57:12.308203] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:05.653 [2024-11-15 10:57:12.308231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:05.653 pt2 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:05.653 10:57:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.653 [2024-11-15 10:57:12.319387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:05.653 [2024-11-15 10:57:12.319494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.653 [2024-11-15 10:57:12.319536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:05.653 [2024-11-15 10:57:12.319570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.653 [2024-11-15 10:57:12.320049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.653 [2024-11-15 10:57:12.320138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:05.653 [2024-11-15 10:57:12.320267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:05.653 [2024-11-15 10:57:12.320347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:05.653 pt3 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.653 [2024-11-15 10:57:12.331323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:05.653 [2024-11-15 
10:57:12.331408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.653 [2024-11-15 10:57:12.331462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:05.653 [2024-11-15 10:57:12.331495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.653 [2024-11-15 10:57:12.331937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.653 [2024-11-15 10:57:12.332006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:05.653 [2024-11-15 10:57:12.332112] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:05.653 [2024-11-15 10:57:12.332168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:05.653 [2024-11-15 10:57:12.332391] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:05.653 [2024-11-15 10:57:12.332442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:05.653 [2024-11-15 10:57:12.332753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:05.653 [2024-11-15 10:57:12.332985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:05.653 [2024-11-15 10:57:12.333044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:05.653 [2024-11-15 10:57:12.333247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.653 pt4 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.653 "name": "raid_bdev1", 00:12:05.653 "uuid": "a5f029ff-4dee-4400-8b6f-40b4fb9333da", 00:12:05.653 "strip_size_kb": 0, 00:12:05.653 "state": "online", 00:12:05.653 "raid_level": "raid1", 00:12:05.653 "superblock": true, 00:12:05.653 "num_base_bdevs": 4, 00:12:05.653 
"num_base_bdevs_discovered": 4, 00:12:05.653 "num_base_bdevs_operational": 4, 00:12:05.653 "base_bdevs_list": [ 00:12:05.653 { 00:12:05.653 "name": "pt1", 00:12:05.653 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.653 "is_configured": true, 00:12:05.653 "data_offset": 2048, 00:12:05.653 "data_size": 63488 00:12:05.653 }, 00:12:05.653 { 00:12:05.653 "name": "pt2", 00:12:05.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.653 "is_configured": true, 00:12:05.653 "data_offset": 2048, 00:12:05.653 "data_size": 63488 00:12:05.653 }, 00:12:05.653 { 00:12:05.653 "name": "pt3", 00:12:05.653 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.653 "is_configured": true, 00:12:05.653 "data_offset": 2048, 00:12:05.653 "data_size": 63488 00:12:05.653 }, 00:12:05.653 { 00:12:05.653 "name": "pt4", 00:12:05.653 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:05.653 "is_configured": true, 00:12:05.653 "data_offset": 2048, 00:12:05.653 "data_size": 63488 00:12:05.653 } 00:12:05.653 ] 00:12:05.653 }' 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.653 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.915 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:05.915 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:05.915 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.915 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.915 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.915 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.915 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:05.915 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.915 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.915 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:06.175 [2024-11-15 10:57:12.838832] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:06.175 "name": "raid_bdev1", 00:12:06.175 "aliases": [ 00:12:06.175 "a5f029ff-4dee-4400-8b6f-40b4fb9333da" 00:12:06.175 ], 00:12:06.175 "product_name": "Raid Volume", 00:12:06.175 "block_size": 512, 00:12:06.175 "num_blocks": 63488, 00:12:06.175 "uuid": "a5f029ff-4dee-4400-8b6f-40b4fb9333da", 00:12:06.175 "assigned_rate_limits": { 00:12:06.175 "rw_ios_per_sec": 0, 00:12:06.175 "rw_mbytes_per_sec": 0, 00:12:06.175 "r_mbytes_per_sec": 0, 00:12:06.175 "w_mbytes_per_sec": 0 00:12:06.175 }, 00:12:06.175 "claimed": false, 00:12:06.175 "zoned": false, 00:12:06.175 "supported_io_types": { 00:12:06.175 "read": true, 00:12:06.175 "write": true, 00:12:06.175 "unmap": false, 00:12:06.175 "flush": false, 00:12:06.175 "reset": true, 00:12:06.175 "nvme_admin": false, 00:12:06.175 "nvme_io": false, 00:12:06.175 "nvme_io_md": false, 00:12:06.175 "write_zeroes": true, 00:12:06.175 "zcopy": false, 00:12:06.175 "get_zone_info": false, 00:12:06.175 "zone_management": false, 00:12:06.175 "zone_append": false, 00:12:06.175 "compare": false, 00:12:06.175 "compare_and_write": false, 00:12:06.175 "abort": false, 00:12:06.175 "seek_hole": false, 00:12:06.175 "seek_data": false, 00:12:06.175 "copy": false, 00:12:06.175 "nvme_iov_md": false 00:12:06.175 }, 00:12:06.175 "memory_domains": [ 00:12:06.175 { 00:12:06.175 "dma_device_id": "system", 00:12:06.175 
"dma_device_type": 1 00:12:06.175 }, 00:12:06.175 { 00:12:06.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.175 "dma_device_type": 2 00:12:06.175 }, 00:12:06.175 { 00:12:06.175 "dma_device_id": "system", 00:12:06.175 "dma_device_type": 1 00:12:06.175 }, 00:12:06.175 { 00:12:06.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.175 "dma_device_type": 2 00:12:06.175 }, 00:12:06.175 { 00:12:06.175 "dma_device_id": "system", 00:12:06.175 "dma_device_type": 1 00:12:06.175 }, 00:12:06.175 { 00:12:06.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.175 "dma_device_type": 2 00:12:06.175 }, 00:12:06.175 { 00:12:06.175 "dma_device_id": "system", 00:12:06.175 "dma_device_type": 1 00:12:06.175 }, 00:12:06.175 { 00:12:06.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.175 "dma_device_type": 2 00:12:06.175 } 00:12:06.175 ], 00:12:06.175 "driver_specific": { 00:12:06.175 "raid": { 00:12:06.175 "uuid": "a5f029ff-4dee-4400-8b6f-40b4fb9333da", 00:12:06.175 "strip_size_kb": 0, 00:12:06.175 "state": "online", 00:12:06.175 "raid_level": "raid1", 00:12:06.175 "superblock": true, 00:12:06.175 "num_base_bdevs": 4, 00:12:06.175 "num_base_bdevs_discovered": 4, 00:12:06.175 "num_base_bdevs_operational": 4, 00:12:06.175 "base_bdevs_list": [ 00:12:06.175 { 00:12:06.175 "name": "pt1", 00:12:06.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:06.175 "is_configured": true, 00:12:06.175 "data_offset": 2048, 00:12:06.175 "data_size": 63488 00:12:06.175 }, 00:12:06.175 { 00:12:06.175 "name": "pt2", 00:12:06.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.175 "is_configured": true, 00:12:06.175 "data_offset": 2048, 00:12:06.175 "data_size": 63488 00:12:06.175 }, 00:12:06.175 { 00:12:06.175 "name": "pt3", 00:12:06.175 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.175 "is_configured": true, 00:12:06.175 "data_offset": 2048, 00:12:06.175 "data_size": 63488 00:12:06.175 }, 00:12:06.175 { 00:12:06.175 "name": "pt4", 00:12:06.175 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:06.175 "is_configured": true, 00:12:06.175 "data_offset": 2048, 00:12:06.175 "data_size": 63488 00:12:06.175 } 00:12:06.175 ] 00:12:06.175 } 00:12:06.175 } 00:12:06.175 }' 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:06.175 pt2 00:12:06.175 pt3 00:12:06.175 pt4' 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.175 10:57:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.175 10:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.175 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.175 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.176 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.176 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.176 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:06.176 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.176 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.176 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.176 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.176 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.176 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.176 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.176 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:06.176 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.176 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:06.176 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.435 [2024-11-15 10:57:13.162299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a5f029ff-4dee-4400-8b6f-40b4fb9333da '!=' a5f029ff-4dee-4400-8b6f-40b4fb9333da ']' 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.435 [2024-11-15 10:57:13.201956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:06.435 10:57:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.435 "name": "raid_bdev1", 00:12:06.435 "uuid": "a5f029ff-4dee-4400-8b6f-40b4fb9333da", 00:12:06.435 "strip_size_kb": 0, 00:12:06.435 "state": "online", 
00:12:06.435 "raid_level": "raid1", 00:12:06.435 "superblock": true, 00:12:06.435 "num_base_bdevs": 4, 00:12:06.435 "num_base_bdevs_discovered": 3, 00:12:06.435 "num_base_bdevs_operational": 3, 00:12:06.435 "base_bdevs_list": [ 00:12:06.435 { 00:12:06.435 "name": null, 00:12:06.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.435 "is_configured": false, 00:12:06.435 "data_offset": 0, 00:12:06.435 "data_size": 63488 00:12:06.435 }, 00:12:06.435 { 00:12:06.435 "name": "pt2", 00:12:06.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.435 "is_configured": true, 00:12:06.435 "data_offset": 2048, 00:12:06.435 "data_size": 63488 00:12:06.435 }, 00:12:06.435 { 00:12:06.435 "name": "pt3", 00:12:06.435 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.435 "is_configured": true, 00:12:06.435 "data_offset": 2048, 00:12:06.435 "data_size": 63488 00:12:06.435 }, 00:12:06.435 { 00:12:06.435 "name": "pt4", 00:12:06.435 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:06.435 "is_configured": true, 00:12:06.435 "data_offset": 2048, 00:12:06.435 "data_size": 63488 00:12:06.435 } 00:12:06.435 ] 00:12:06.435 }' 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.435 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.695 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:06.695 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.695 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.695 [2024-11-15 10:57:13.593220] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:06.695 [2024-11-15 10:57:13.593342] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.695 [2024-11-15 10:57:13.593466] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:06.695 [2024-11-15 10:57:13.593581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.695 [2024-11-15 10:57:13.593646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:06.695 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.695 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.695 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:06.695 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.695 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.695 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:06.955 
10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.955 [2024-11-15 10:57:13.693068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:06.955 [2024-11-15 10:57:13.693186] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.955 [2024-11-15 10:57:13.693215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:06.955 [2024-11-15 10:57:13.693227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.955 [2024-11-15 10:57:13.695797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.955 [2024-11-15 10:57:13.695845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:06.955 [2024-11-15 10:57:13.695957] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:06.955 [2024-11-15 10:57:13.696013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:06.955 pt2 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.955 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.956 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.956 10:57:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.956 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.956 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.956 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.956 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.956 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.956 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.956 "name": "raid_bdev1", 00:12:06.956 "uuid": "a5f029ff-4dee-4400-8b6f-40b4fb9333da", 00:12:06.956 "strip_size_kb": 0, 00:12:06.956 "state": "configuring", 00:12:06.956 "raid_level": "raid1", 00:12:06.956 "superblock": true, 00:12:06.956 "num_base_bdevs": 4, 00:12:06.956 "num_base_bdevs_discovered": 1, 00:12:06.956 "num_base_bdevs_operational": 3, 00:12:06.956 "base_bdevs_list": [ 00:12:06.956 { 00:12:06.956 "name": null, 00:12:06.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.956 "is_configured": false, 00:12:06.956 "data_offset": 2048, 00:12:06.956 "data_size": 63488 00:12:06.956 }, 00:12:06.956 { 00:12:06.956 "name": "pt2", 00:12:06.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.956 "is_configured": true, 00:12:06.956 "data_offset": 2048, 00:12:06.956 "data_size": 63488 00:12:06.956 }, 00:12:06.956 { 00:12:06.956 "name": null, 00:12:06.956 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.956 "is_configured": false, 00:12:06.956 "data_offset": 2048, 00:12:06.956 "data_size": 63488 00:12:06.956 }, 00:12:06.956 { 00:12:06.956 "name": null, 00:12:06.956 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:06.956 "is_configured": false, 00:12:06.956 "data_offset": 2048, 00:12:06.956 "data_size": 63488 00:12:06.956 } 00:12:06.956 ] 00:12:06.956 }' 
00:12:06.956 10:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.956 10:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.215 [2024-11-15 10:57:14.120391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:07.215 [2024-11-15 10:57:14.120474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.215 [2024-11-15 10:57:14.120502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:07.215 [2024-11-15 10:57:14.120514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.215 [2024-11-15 10:57:14.121031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.215 [2024-11-15 10:57:14.121064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:07.215 [2024-11-15 10:57:14.121167] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:07.215 [2024-11-15 10:57:14.121192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:07.215 pt3 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.215 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.216 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.216 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.475 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.475 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.475 "name": "raid_bdev1", 00:12:07.475 "uuid": "a5f029ff-4dee-4400-8b6f-40b4fb9333da", 00:12:07.475 "strip_size_kb": 0, 00:12:07.475 "state": "configuring", 00:12:07.475 "raid_level": "raid1", 00:12:07.475 "superblock": true, 00:12:07.475 "num_base_bdevs": 4, 00:12:07.475 "num_base_bdevs_discovered": 2, 00:12:07.475 "num_base_bdevs_operational": 3, 00:12:07.475 
"base_bdevs_list": [ 00:12:07.475 { 00:12:07.475 "name": null, 00:12:07.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.475 "is_configured": false, 00:12:07.475 "data_offset": 2048, 00:12:07.475 "data_size": 63488 00:12:07.475 }, 00:12:07.475 { 00:12:07.475 "name": "pt2", 00:12:07.475 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.475 "is_configured": true, 00:12:07.475 "data_offset": 2048, 00:12:07.475 "data_size": 63488 00:12:07.475 }, 00:12:07.475 { 00:12:07.475 "name": "pt3", 00:12:07.475 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.475 "is_configured": true, 00:12:07.475 "data_offset": 2048, 00:12:07.475 "data_size": 63488 00:12:07.475 }, 00:12:07.475 { 00:12:07.475 "name": null, 00:12:07.475 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:07.475 "is_configured": false, 00:12:07.475 "data_offset": 2048, 00:12:07.475 "data_size": 63488 00:12:07.475 } 00:12:07.475 ] 00:12:07.475 }' 00:12:07.475 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.475 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.733 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:07.733 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:07.733 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:07.733 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:07.733 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.733 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.733 [2024-11-15 10:57:14.567712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:07.733 [2024-11-15 10:57:14.567841] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.733 [2024-11-15 10:57:14.567912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:07.733 [2024-11-15 10:57:14.567949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.733 [2024-11-15 10:57:14.568463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.733 [2024-11-15 10:57:14.568529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:07.733 [2024-11-15 10:57:14.568659] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:07.733 [2024-11-15 10:57:14.568729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:07.733 [2024-11-15 10:57:14.568921] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:07.733 [2024-11-15 10:57:14.568966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:07.733 [2024-11-15 10:57:14.569260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:07.734 [2024-11-15 10:57:14.569501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:07.734 [2024-11-15 10:57:14.569556] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:07.734 [2024-11-15 10:57:14.569771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.734 pt4 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.734 "name": "raid_bdev1", 00:12:07.734 "uuid": "a5f029ff-4dee-4400-8b6f-40b4fb9333da", 00:12:07.734 "strip_size_kb": 0, 00:12:07.734 "state": "online", 00:12:07.734 "raid_level": "raid1", 00:12:07.734 "superblock": true, 00:12:07.734 "num_base_bdevs": 4, 00:12:07.734 "num_base_bdevs_discovered": 3, 00:12:07.734 "num_base_bdevs_operational": 3, 00:12:07.734 "base_bdevs_list": [ 00:12:07.734 { 00:12:07.734 "name": null, 00:12:07.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.734 "is_configured": false, 00:12:07.734 
"data_offset": 2048, 00:12:07.734 "data_size": 63488 00:12:07.734 }, 00:12:07.734 { 00:12:07.734 "name": "pt2", 00:12:07.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.734 "is_configured": true, 00:12:07.734 "data_offset": 2048, 00:12:07.734 "data_size": 63488 00:12:07.734 }, 00:12:07.734 { 00:12:07.734 "name": "pt3", 00:12:07.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.734 "is_configured": true, 00:12:07.734 "data_offset": 2048, 00:12:07.734 "data_size": 63488 00:12:07.734 }, 00:12:07.734 { 00:12:07.734 "name": "pt4", 00:12:07.734 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:07.734 "is_configured": true, 00:12:07.734 "data_offset": 2048, 00:12:07.734 "data_size": 63488 00:12:07.734 } 00:12:07.734 ] 00:12:07.734 }' 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.734 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.303 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:08.303 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.303 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.303 [2024-11-15 10:57:14.990961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:08.303 [2024-11-15 10:57:14.991061] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.303 [2024-11-15 10:57:14.991171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.303 [2024-11-15 10:57:14.991267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.303 [2024-11-15 10:57:14.991362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:08.303 10:57:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.303 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:08.303 10:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.303 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.303 10:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.303 [2024-11-15 10:57:15.054842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:08.303 [2024-11-15 10:57:15.054936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:08.303 [2024-11-15 10:57:15.054958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:08.303 [2024-11-15 10:57:15.054973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.303 [2024-11-15 10:57:15.057450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.303 [2024-11-15 10:57:15.057555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:08.303 [2024-11-15 10:57:15.057657] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:08.303 [2024-11-15 10:57:15.057734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:08.303 [2024-11-15 10:57:15.057882] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:08.303 [2024-11-15 10:57:15.057896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:08.303 [2024-11-15 10:57:15.057913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:08.303 [2024-11-15 10:57:15.057995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:08.303 [2024-11-15 10:57:15.058111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:08.303 pt1 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.303 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.303 "name": "raid_bdev1", 00:12:08.303 "uuid": "a5f029ff-4dee-4400-8b6f-40b4fb9333da", 00:12:08.303 "strip_size_kb": 0, 00:12:08.303 "state": "configuring", 00:12:08.303 "raid_level": "raid1", 00:12:08.303 "superblock": true, 00:12:08.303 "num_base_bdevs": 4, 00:12:08.303 "num_base_bdevs_discovered": 2, 00:12:08.303 "num_base_bdevs_operational": 3, 00:12:08.303 "base_bdevs_list": [ 00:12:08.303 { 00:12:08.303 "name": null, 00:12:08.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.303 "is_configured": false, 00:12:08.303 "data_offset": 2048, 00:12:08.303 
"data_size": 63488 00:12:08.303 }, 00:12:08.303 { 00:12:08.303 "name": "pt2", 00:12:08.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.303 "is_configured": true, 00:12:08.303 "data_offset": 2048, 00:12:08.303 "data_size": 63488 00:12:08.303 }, 00:12:08.303 { 00:12:08.303 "name": "pt3", 00:12:08.303 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.303 "is_configured": true, 00:12:08.303 "data_offset": 2048, 00:12:08.303 "data_size": 63488 00:12:08.303 }, 00:12:08.303 { 00:12:08.303 "name": null, 00:12:08.303 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:08.303 "is_configured": false, 00:12:08.304 "data_offset": 2048, 00:12:08.304 "data_size": 63488 00:12:08.304 } 00:12:08.304 ] 00:12:08.304 }' 00:12:08.304 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.304 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.562 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:08.562 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.562 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.562 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:08.562 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.562 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.563 [2024-11-15 
10:57:15.470208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:08.563 [2024-11-15 10:57:15.470349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.563 [2024-11-15 10:57:15.470410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:08.563 [2024-11-15 10:57:15.470450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.563 [2024-11-15 10:57:15.470942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.563 [2024-11-15 10:57:15.471013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:08.563 [2024-11-15 10:57:15.471146] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:08.563 [2024-11-15 10:57:15.471216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:08.563 [2024-11-15 10:57:15.471426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:08.563 [2024-11-15 10:57:15.471471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:08.563 [2024-11-15 10:57:15.471765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:08.563 [2024-11-15 10:57:15.471994] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:08.563 [2024-11-15 10:57:15.472050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:08.563 [2024-11-15 10:57:15.472284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.563 pt4 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:08.563 10:57:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.563 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.822 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.822 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.822 "name": "raid_bdev1", 00:12:08.822 "uuid": "a5f029ff-4dee-4400-8b6f-40b4fb9333da", 00:12:08.822 "strip_size_kb": 0, 00:12:08.822 "state": "online", 00:12:08.822 "raid_level": "raid1", 00:12:08.822 "superblock": true, 00:12:08.822 "num_base_bdevs": 4, 00:12:08.822 "num_base_bdevs_discovered": 3, 00:12:08.822 "num_base_bdevs_operational": 3, 00:12:08.822 "base_bdevs_list": [ 00:12:08.822 { 
00:12:08.822 "name": null, 00:12:08.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.822 "is_configured": false, 00:12:08.822 "data_offset": 2048, 00:12:08.822 "data_size": 63488 00:12:08.822 }, 00:12:08.822 { 00:12:08.822 "name": "pt2", 00:12:08.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.822 "is_configured": true, 00:12:08.822 "data_offset": 2048, 00:12:08.822 "data_size": 63488 00:12:08.822 }, 00:12:08.822 { 00:12:08.822 "name": "pt3", 00:12:08.822 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.822 "is_configured": true, 00:12:08.822 "data_offset": 2048, 00:12:08.822 "data_size": 63488 00:12:08.822 }, 00:12:08.822 { 00:12:08.822 "name": "pt4", 00:12:08.822 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:08.822 "is_configured": true, 00:12:08.822 "data_offset": 2048, 00:12:08.822 "data_size": 63488 00:12:08.822 } 00:12:08.822 ] 00:12:08.822 }' 00:12:08.822 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.822 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.081 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:09.081 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:09.081 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.081 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.081 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.081 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:09.081 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:09.081 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:09.081 
10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.081 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.081 [2024-11-15 10:57:15.961762] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.081 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.081 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a5f029ff-4dee-4400-8b6f-40b4fb9333da '!=' a5f029ff-4dee-4400-8b6f-40b4fb9333da ']' 00:12:09.081 10:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74683 00:12:09.081 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 74683 ']' 00:12:09.081 10:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 74683 00:12:09.081 10:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:12:09.339 10:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:09.339 10:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74683 00:12:09.339 10:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:09.339 10:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:09.339 10:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74683' 00:12:09.339 killing process with pid 74683 00:12:09.339 10:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 74683 00:12:09.339 [2024-11-15 10:57:16.045383] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:09.339 [2024-11-15 10:57:16.045503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.339 10:57:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 74683 00:12:09.339 [2024-11-15 10:57:16.045597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.339 [2024-11-15 10:57:16.045614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:09.597 [2024-11-15 10:57:16.475792] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:10.972 10:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:10.972 ************************************ 00:12:10.972 END TEST raid_superblock_test 00:12:10.972 ************************************ 00:12:10.972 00:12:10.972 real 0m8.583s 00:12:10.972 user 0m13.387s 00:12:10.972 sys 0m1.599s 00:12:10.972 10:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:10.972 10:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.972 10:57:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:10.972 10:57:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:10.972 10:57:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:10.972 10:57:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:10.972 ************************************ 00:12:10.972 START TEST raid_read_error_test 00:12:10.972 ************************************ 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:10.972 
10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:10.972 10:57:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kqyFtXtWct 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75176 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75176 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 75176 ']' 00:12:10.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:10.972 10:57:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.972 [2024-11-15 10:57:17.790807] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:12:10.972 [2024-11-15 10:57:17.790926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75176 ] 00:12:11.231 [2024-11-15 10:57:17.945746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.231 [2024-11-15 10:57:18.065255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.490 [2024-11-15 10:57:18.275133] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.490 [2024-11-15 10:57:18.275200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.749 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:11.749 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:11.749 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:11.749 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:11.749 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.749 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 BaseBdev1_malloc 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 true 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 [2024-11-15 10:57:18.699257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:12.008 [2024-11-15 10:57:18.699330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.008 [2024-11-15 10:57:18.699353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:12.008 [2024-11-15 10:57:18.699377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.008 [2024-11-15 10:57:18.701517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.008 [2024-11-15 10:57:18.701565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:12.008 BaseBdev1 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 BaseBdev2_malloc 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 true 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.008 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 [2024-11-15 10:57:18.768744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:12.008 [2024-11-15 10:57:18.768857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.008 [2024-11-15 10:57:18.768899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:12.008 [2024-11-15 10:57:18.768912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.008 [2024-11-15 10:57:18.771120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.009 [2024-11-15 10:57:18.771165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:12.009 BaseBdev2 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.009 BaseBdev3_malloc 00:12:12.009 10:57:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.009 true 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.009 [2024-11-15 10:57:18.846245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:12.009 [2024-11-15 10:57:18.846376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.009 [2024-11-15 10:57:18.846401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:12.009 [2024-11-15 10:57:18.846414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.009 [2024-11-15 10:57:18.848709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.009 [2024-11-15 10:57:18.848756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:12.009 BaseBdev3 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.009 BaseBdev4_malloc 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.009 true 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.009 [2024-11-15 10:57:18.912754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:12.009 [2024-11-15 10:57:18.912878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.009 [2024-11-15 10:57:18.912903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:12.009 [2024-11-15 10:57:18.912917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.009 [2024-11-15 10:57:18.915023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.009 [2024-11-15 10:57:18.915070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:12.009 BaseBdev4 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.009 [2024-11-15 10:57:18.924801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.009 [2024-11-15 10:57:18.926783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:12.009 [2024-11-15 10:57:18.926872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:12.009 [2024-11-15 10:57:18.926954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:12.009 [2024-11-15 10:57:18.927227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:12.009 [2024-11-15 10:57:18.927243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:12.009 [2024-11-15 10:57:18.927515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:12.009 [2024-11-15 10:57:18.927696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:12.009 [2024-11-15 10:57:18.927706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:12.009 [2024-11-15 10:57:18.927879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:12.009 10:57:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.009 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.269 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.269 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.269 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.269 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.269 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.269 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.269 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.269 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.269 "name": "raid_bdev1", 00:12:12.269 "uuid": "231dab10-459e-4fb1-ac15-9294c366a42d", 00:12:12.269 "strip_size_kb": 0, 00:12:12.269 "state": "online", 00:12:12.269 "raid_level": "raid1", 00:12:12.269 "superblock": true, 00:12:12.269 "num_base_bdevs": 4, 00:12:12.269 "num_base_bdevs_discovered": 4, 00:12:12.269 "num_base_bdevs_operational": 4, 00:12:12.269 "base_bdevs_list": [ 00:12:12.269 { 
00:12:12.269 "name": "BaseBdev1", 00:12:12.269 "uuid": "3ea4c8d2-ee87-5394-ba8d-64a33510717a", 00:12:12.269 "is_configured": true, 00:12:12.269 "data_offset": 2048, 00:12:12.269 "data_size": 63488 00:12:12.269 }, 00:12:12.269 { 00:12:12.269 "name": "BaseBdev2", 00:12:12.269 "uuid": "29fbe57b-496c-560d-a61b-5bda7d8a7b54", 00:12:12.269 "is_configured": true, 00:12:12.269 "data_offset": 2048, 00:12:12.269 "data_size": 63488 00:12:12.269 }, 00:12:12.269 { 00:12:12.269 "name": "BaseBdev3", 00:12:12.269 "uuid": "d0f58d65-e939-58b9-8842-3cb9ed174b67", 00:12:12.269 "is_configured": true, 00:12:12.269 "data_offset": 2048, 00:12:12.269 "data_size": 63488 00:12:12.269 }, 00:12:12.269 { 00:12:12.269 "name": "BaseBdev4", 00:12:12.269 "uuid": "0e468911-d82b-5853-8ce5-73d4eea38982", 00:12:12.269 "is_configured": true, 00:12:12.269 "data_offset": 2048, 00:12:12.269 "data_size": 63488 00:12:12.269 } 00:12:12.269 ] 00:12:12.269 }' 00:12:12.269 10:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.269 10:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.529 10:57:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:12.529 10:57:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:12.787 [2024-11-15 10:57:19.457404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.750 10:57:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.750 10:57:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.750 "name": "raid_bdev1", 00:12:13.750 "uuid": "231dab10-459e-4fb1-ac15-9294c366a42d", 00:12:13.750 "strip_size_kb": 0, 00:12:13.750 "state": "online", 00:12:13.750 "raid_level": "raid1", 00:12:13.750 "superblock": true, 00:12:13.750 "num_base_bdevs": 4, 00:12:13.750 "num_base_bdevs_discovered": 4, 00:12:13.750 "num_base_bdevs_operational": 4, 00:12:13.750 "base_bdevs_list": [ 00:12:13.750 { 00:12:13.750 "name": "BaseBdev1", 00:12:13.750 "uuid": "3ea4c8d2-ee87-5394-ba8d-64a33510717a", 00:12:13.750 "is_configured": true, 00:12:13.750 "data_offset": 2048, 00:12:13.750 "data_size": 63488 00:12:13.750 }, 00:12:13.750 { 00:12:13.750 "name": "BaseBdev2", 00:12:13.750 "uuid": "29fbe57b-496c-560d-a61b-5bda7d8a7b54", 00:12:13.750 "is_configured": true, 00:12:13.750 "data_offset": 2048, 00:12:13.750 "data_size": 63488 00:12:13.750 }, 00:12:13.750 { 00:12:13.750 "name": "BaseBdev3", 00:12:13.750 "uuid": "d0f58d65-e939-58b9-8842-3cb9ed174b67", 00:12:13.750 "is_configured": true, 00:12:13.750 "data_offset": 2048, 00:12:13.750 "data_size": 63488 00:12:13.750 }, 00:12:13.750 { 00:12:13.750 "name": "BaseBdev4", 00:12:13.750 "uuid": "0e468911-d82b-5853-8ce5-73d4eea38982", 00:12:13.750 "is_configured": true, 00:12:13.750 "data_offset": 2048, 00:12:13.750 "data_size": 63488 00:12:13.750 } 00:12:13.750 ] 00:12:13.750 }' 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.750 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.020 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:14.020 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.020 10:57:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:14.020 [2024-11-15 10:57:20.797212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:14.020 [2024-11-15 10:57:20.797365] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:14.020 [2024-11-15 10:57:20.800403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.020 [2024-11-15 10:57:20.800529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.020 [2024-11-15 10:57:20.800705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.020 [2024-11-15 10:57:20.800771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:14.020 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.020 { 00:12:14.020 "results": [ 00:12:14.020 { 00:12:14.020 "job": "raid_bdev1", 00:12:14.020 "core_mask": "0x1", 00:12:14.020 "workload": "randrw", 00:12:14.020 "percentage": 50, 00:12:14.020 "status": "finished", 00:12:14.020 "queue_depth": 1, 00:12:14.020 "io_size": 131072, 00:12:14.020 "runtime": 1.340762, 00:12:14.020 "iops": 9996.554198284259, 00:12:14.020 "mibps": 1249.5692747855323, 00:12:14.020 "io_failed": 0, 00:12:14.020 "io_timeout": 0, 00:12:14.020 "avg_latency_us": 96.94416924842805, 00:12:14.020 "min_latency_us": 25.152838427947597, 00:12:14.020 "max_latency_us": 1681.3275109170306 00:12:14.020 } 00:12:14.020 ], 00:12:14.020 "core_count": 1 00:12:14.020 } 00:12:14.020 10:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75176 00:12:14.020 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 75176 ']' 00:12:14.020 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 75176 00:12:14.020 10:57:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:12:14.020 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:14.020 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75176 00:12:14.020 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:14.020 killing process with pid 75176 00:12:14.020 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:14.020 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75176' 00:12:14.020 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 75176 00:12:14.020 [2024-11-15 10:57:20.849973] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.020 10:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 75176 00:12:14.278 [2024-11-15 10:57:21.201772] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:15.653 10:57:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:15.653 10:57:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kqyFtXtWct 00:12:15.653 10:57:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:15.653 10:57:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:15.653 10:57:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:15.653 10:57:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:15.653 10:57:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:15.653 ************************************ 00:12:15.653 END TEST raid_read_error_test 00:12:15.653 ************************************ 00:12:15.653 10:57:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:15.653 00:12:15.653 real 0m4.732s 00:12:15.653 user 0m5.566s 00:12:15.653 sys 0m0.582s 00:12:15.653 10:57:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:15.653 10:57:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.653 10:57:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:15.653 10:57:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:15.653 10:57:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:15.653 10:57:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:15.653 ************************************ 00:12:15.653 START TEST raid_write_error_test 00:12:15.653 ************************************ 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.soaAH927uf 00:12:15.653 10:57:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75316 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75316 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 75316 ']' 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:15.653 10:57:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.911 [2024-11-15 10:57:22.589065] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:12:15.911 [2024-11-15 10:57:22.589280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75316 ] 00:12:15.911 [2024-11-15 10:57:22.748895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.168 [2024-11-15 10:57:22.874276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.169 [2024-11-15 10:57:23.077924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.169 [2024-11-15 10:57:23.078067] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.736 BaseBdev1_malloc 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.736 true 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.736 [2024-11-15 10:57:23.550601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:16.736 [2024-11-15 10:57:23.550668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.736 [2024-11-15 10:57:23.550711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:16.736 [2024-11-15 10:57:23.550726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.736 [2024-11-15 10:57:23.553248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.736 [2024-11-15 10:57:23.553297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:16.736 BaseBdev1 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.736 BaseBdev2_malloc 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:16.736 10:57:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.736 true 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.736 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:16.737 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.737 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.737 [2024-11-15 10:57:23.615636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:16.737 [2024-11-15 10:57:23.615751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.737 [2024-11-15 10:57:23.615775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:16.737 [2024-11-15 10:57:23.615789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.737 [2024-11-15 10:57:23.618224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.737 [2024-11-15 10:57:23.618273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:16.737 BaseBdev2 00:12:16.737 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.737 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:16.737 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:16.737 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.737 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:16.996 BaseBdev3_malloc 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.996 true 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.996 [2024-11-15 10:57:23.693560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:16.996 [2024-11-15 10:57:23.693622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.996 [2024-11-15 10:57:23.693643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:16.996 [2024-11-15 10:57:23.693656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.996 [2024-11-15 10:57:23.696032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.996 [2024-11-15 10:57:23.696085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:16.996 BaseBdev3 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.996 BaseBdev4_malloc 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.996 true 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.996 [2024-11-15 10:57:23.756185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:16.996 [2024-11-15 10:57:23.756324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.996 [2024-11-15 10:57:23.756354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:16.996 [2024-11-15 10:57:23.756369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.996 [2024-11-15 10:57:23.758583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.996 [2024-11-15 10:57:23.758629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:16.996 BaseBdev4 
00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.996 [2024-11-15 10:57:23.764234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.996 [2024-11-15 10:57:23.766376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.996 [2024-11-15 10:57:23.766538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:16.996 [2024-11-15 10:57:23.766650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:16.996 [2024-11-15 10:57:23.766938] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:16.996 [2024-11-15 10:57:23.766958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:16.996 [2024-11-15 10:57:23.767251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:16.996 [2024-11-15 10:57:23.767495] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:16.996 [2024-11-15 10:57:23.767507] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:16.996 [2024-11-15 10:57:23.767690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.996 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.996 "name": "raid_bdev1", 00:12:16.996 "uuid": "7b3909a8-a663-4928-85fb-748588f96a0d", 00:12:16.996 "strip_size_kb": 0, 00:12:16.996 "state": "online", 00:12:16.996 "raid_level": "raid1", 00:12:16.996 "superblock": true, 00:12:16.996 "num_base_bdevs": 4, 00:12:16.996 "num_base_bdevs_discovered": 4, 00:12:16.996 
"num_base_bdevs_operational": 4, 00:12:16.996 "base_bdevs_list": [ 00:12:16.996 { 00:12:16.996 "name": "BaseBdev1", 00:12:16.996 "uuid": "c2754d63-ca49-513f-9160-6040dacb6529", 00:12:16.996 "is_configured": true, 00:12:16.996 "data_offset": 2048, 00:12:16.996 "data_size": 63488 00:12:16.996 }, 00:12:16.996 { 00:12:16.996 "name": "BaseBdev2", 00:12:16.996 "uuid": "f6ff2f04-4e27-59e6-81c8-dcb2966c1acb", 00:12:16.996 "is_configured": true, 00:12:16.996 "data_offset": 2048, 00:12:16.997 "data_size": 63488 00:12:16.997 }, 00:12:16.997 { 00:12:16.997 "name": "BaseBdev3", 00:12:16.997 "uuid": "699e02c7-0c6b-5492-8366-148324fa42ab", 00:12:16.997 "is_configured": true, 00:12:16.997 "data_offset": 2048, 00:12:16.997 "data_size": 63488 00:12:16.997 }, 00:12:16.997 { 00:12:16.997 "name": "BaseBdev4", 00:12:16.997 "uuid": "303e0882-9305-53c0-a99b-c1fa1bf87c5b", 00:12:16.997 "is_configured": true, 00:12:16.997 "data_offset": 2048, 00:12:16.997 "data_size": 63488 00:12:16.997 } 00:12:16.997 ] 00:12:16.997 }' 00:12:16.997 10:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.997 10:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.631 10:57:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:17.631 10:57:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:17.631 [2024-11-15 10:57:24.304766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.565 [2024-11-15 10:57:25.241213] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:18.565 [2024-11-15 10:57:25.241392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:18.565 [2024-11-15 10:57:25.241710] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.565 "name": "raid_bdev1", 00:12:18.565 "uuid": "7b3909a8-a663-4928-85fb-748588f96a0d", 00:12:18.565 "strip_size_kb": 0, 00:12:18.565 "state": "online", 00:12:18.565 "raid_level": "raid1", 00:12:18.565 "superblock": true, 00:12:18.565 "num_base_bdevs": 4, 00:12:18.565 "num_base_bdevs_discovered": 3, 00:12:18.565 "num_base_bdevs_operational": 3, 00:12:18.565 "base_bdevs_list": [ 00:12:18.565 { 00:12:18.565 "name": null, 00:12:18.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.565 "is_configured": false, 00:12:18.565 "data_offset": 0, 00:12:18.565 "data_size": 63488 00:12:18.565 }, 00:12:18.565 { 00:12:18.565 "name": "BaseBdev2", 00:12:18.565 "uuid": "f6ff2f04-4e27-59e6-81c8-dcb2966c1acb", 00:12:18.565 "is_configured": true, 00:12:18.565 "data_offset": 2048, 00:12:18.565 "data_size": 63488 00:12:18.565 }, 00:12:18.565 { 00:12:18.565 "name": "BaseBdev3", 00:12:18.565 "uuid": "699e02c7-0c6b-5492-8366-148324fa42ab", 00:12:18.565 "is_configured": true, 00:12:18.565 "data_offset": 2048, 00:12:18.565 "data_size": 63488 00:12:18.565 }, 00:12:18.565 { 00:12:18.565 "name": "BaseBdev4", 00:12:18.565 "uuid": "303e0882-9305-53c0-a99b-c1fa1bf87c5b", 00:12:18.565 "is_configured": true, 00:12:18.565 "data_offset": 2048, 00:12:18.565 "data_size": 63488 00:12:18.565 } 00:12:18.565 ] 
00:12:18.565 }' 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.565 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.825 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:18.825 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.825 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.825 [2024-11-15 10:57:25.695132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.825 [2024-11-15 10:57:25.695239] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.825 [2024-11-15 10:57:25.698077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.825 [2024-11-15 10:57:25.698124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.825 [2024-11-15 10:57:25.698236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.825 [2024-11-15 10:57:25.698247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:18.825 { 00:12:18.825 "results": [ 00:12:18.825 { 00:12:18.825 "job": "raid_bdev1", 00:12:18.825 "core_mask": "0x1", 00:12:18.825 "workload": "randrw", 00:12:18.825 "percentage": 50, 00:12:18.825 "status": "finished", 00:12:18.825 "queue_depth": 1, 00:12:18.825 "io_size": 131072, 00:12:18.825 "runtime": 1.391274, 00:12:18.825 "iops": 10598.199923235825, 00:12:18.825 "mibps": 1324.774990404478, 00:12:18.825 "io_failed": 0, 00:12:18.825 "io_timeout": 0, 00:12:18.825 "avg_latency_us": 91.2266847913807, 00:12:18.825 "min_latency_us": 24.929257641921396, 00:12:18.825 "max_latency_us": 1988.9746724890829 00:12:18.825 } 00:12:18.825 ], 00:12:18.825 "core_count": 1 
00:12:18.825 } 00:12:18.825 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.825 10:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75316 00:12:18.825 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 75316 ']' 00:12:18.825 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 75316 00:12:18.825 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:12:18.825 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:18.825 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75316 00:12:18.825 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:18.825 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:18.825 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75316' 00:12:18.825 killing process with pid 75316 00:12:18.825 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 75316 00:12:18.825 [2024-11-15 10:57:25.745035] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.825 10:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 75316 00:12:19.392 [2024-11-15 10:57:26.071444] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.332 10:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.soaAH927uf 00:12:20.332 10:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:20.332 10:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:20.332 10:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:20.332 10:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:20.592 10:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:20.592 10:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:20.592 10:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:20.592 00:12:20.592 real 0m4.784s 00:12:20.592 user 0m5.676s 00:12:20.592 sys 0m0.570s 00:12:20.592 10:57:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:20.592 10:57:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.592 ************************************ 00:12:20.592 END TEST raid_write_error_test 00:12:20.592 ************************************ 00:12:20.592 10:57:27 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:20.592 10:57:27 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:20.592 10:57:27 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:20.592 10:57:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:20.592 10:57:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:20.592 10:57:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.592 ************************************ 00:12:20.592 START TEST raid_rebuild_test 00:12:20.592 ************************************ 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:20.592 
10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75460 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75460 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75460 ']' 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:20.592 10:57:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.592 [2024-11-15 10:57:27.440917] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:12:20.592 [2024-11-15 10:57:27.441962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75460 ] 00:12:20.592 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:20.592 Zero copy mechanism will not be used. 
00:12:20.852 [2024-11-15 10:57:27.626869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.852 [2024-11-15 10:57:27.743413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.112 [2024-11-15 10:57:27.949842] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.112 [2024-11-15 10:57:27.950007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.379 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:21.379 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:12:21.379 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:21.379 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:21.379 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.379 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.652 BaseBdev1_malloc 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.652 [2024-11-15 10:57:28.306274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:21.652 [2024-11-15 10:57:28.306359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.652 [2024-11-15 10:57:28.306386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:21.652 [2024-11-15 10:57:28.306400] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.652 [2024-11-15 10:57:28.308533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.652 [2024-11-15 10:57:28.308651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:21.652 BaseBdev1 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.652 BaseBdev2_malloc 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.652 [2024-11-15 10:57:28.361444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:21.652 [2024-11-15 10:57:28.361512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.652 [2024-11-15 10:57:28.361535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:21.652 [2024-11-15 10:57:28.361547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.652 [2024-11-15 10:57:28.363630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.652 [2024-11-15 10:57:28.363758] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:21.652 BaseBdev2 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.652 spare_malloc 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.652 spare_delay 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.652 [2024-11-15 10:57:28.438682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:21.652 [2024-11-15 10:57:28.438751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.652 [2024-11-15 10:57:28.438775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:21.652 [2024-11-15 10:57:28.438788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.652 [2024-11-15 
10:57:28.441136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.652 [2024-11-15 10:57:28.441248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:21.652 spare 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.652 [2024-11-15 10:57:28.450715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.652 [2024-11-15 10:57:28.452749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.652 [2024-11-15 10:57:28.452861] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:21.652 [2024-11-15 10:57:28.452879] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:21.652 [2024-11-15 10:57:28.453183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:21.652 [2024-11-15 10:57:28.453366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:21.652 [2024-11-15 10:57:28.453380] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:21.652 [2024-11-15 10:57:28.453594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.652 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:21.653 10:57:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.653 "name": "raid_bdev1", 00:12:21.653 "uuid": "eaed805c-7e7c-4826-9e0c-5131b02dff05", 00:12:21.653 "strip_size_kb": 0, 00:12:21.653 "state": "online", 00:12:21.653 "raid_level": "raid1", 00:12:21.653 "superblock": false, 00:12:21.653 "num_base_bdevs": 2, 00:12:21.653 "num_base_bdevs_discovered": 2, 00:12:21.653 "num_base_bdevs_operational": 2, 00:12:21.653 "base_bdevs_list": [ 00:12:21.653 { 00:12:21.653 "name": "BaseBdev1", 
00:12:21.653 "uuid": "3b338fc5-00c3-5bc9-863b-4a391dd24d27", 00:12:21.653 "is_configured": true, 00:12:21.653 "data_offset": 0, 00:12:21.653 "data_size": 65536 00:12:21.653 }, 00:12:21.653 { 00:12:21.653 "name": "BaseBdev2", 00:12:21.653 "uuid": "3073581f-9b0d-59cf-85ce-9dfa843c5213", 00:12:21.653 "is_configured": true, 00:12:21.653 "data_offset": 0, 00:12:21.653 "data_size": 65536 00:12:21.653 } 00:12:21.653 ] 00:12:21.653 }' 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.653 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.222 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:22.222 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:22.222 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.222 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.222 [2024-11-15 10:57:28.938269] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.222 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.222 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:22.222 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.222 10:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:22.222 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.222 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.222 10:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.222 10:57:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:22.222 
10:57:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:22.222 10:57:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:22.222 10:57:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:22.222 10:57:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:22.222 10:57:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:22.222 10:57:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:22.222 10:57:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:22.222 10:57:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:22.222 10:57:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:22.222 10:57:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:22.222 10:57:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:22.222 10:57:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:22.222 10:57:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:22.483 [2024-11-15 10:57:29.205541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:22.483 /dev/nbd0 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.483 1+0 records in 00:12:22.483 1+0 records out 00:12:22.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055582 s, 7.4 MB/s 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:22.483 10:57:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:12:27.758 65536+0 records in 00:12:27.758 65536+0 records out 00:12:27.758 33554432 bytes (34 MB, 32 MiB) copied, 4.88862 s, 6.9 MB/s 00:12:27.758 10:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:27.758 10:57:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:27.758 10:57:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:27.758 10:57:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:27.758 10:57:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:27.758 10:57:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.758 10:57:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:27.758 [2024-11-15 10:57:34.389234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.758 10:57:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.759 [2024-11-15 10:57:34.423139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.759 10:57:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:27.759 "name": "raid_bdev1",
00:12:27.759 "uuid": "eaed805c-7e7c-4826-9e0c-5131b02dff05",
00:12:27.759 "strip_size_kb": 0,
00:12:27.759 "state": "online",
00:12:27.759 "raid_level": "raid1",
00:12:27.759 "superblock": false,
00:12:27.759 "num_base_bdevs": 2,
00:12:27.759 "num_base_bdevs_discovered": 1,
00:12:27.759 "num_base_bdevs_operational": 1,
00:12:27.759 "base_bdevs_list": [
00:12:27.759 {
00:12:27.759 "name": null,
00:12:27.759 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:27.759 "is_configured": false,
00:12:27.759 "data_offset": 0,
00:12:27.759 "data_size": 65536
00:12:27.759 },
00:12:27.759 {
00:12:27.759 "name": "BaseBdev2",
00:12:27.759 "uuid": "3073581f-9b0d-59cf-85ce-9dfa843c5213",
00:12:27.759 "is_configured": true,
00:12:27.759 "data_offset": 0,
00:12:27.759 "data_size": 65536
00:12:27.759 }
00:12:27.759 ]
00:12:27.759 }'
00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:27.759 10:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.017 10:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:28.017 10:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.017 10:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.017 [2024-11-15 10:57:34.898401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:28.017 [2024-11-15 10:57:34.917032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0
00:12:28.017 10:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.017 [2024-11-15 10:57:34.919213] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:28.017 10:57:34 bdev_raid.raid_rebuild_test --
bdev/bdev_raid.sh@647 -- # sleep 1
00:12:29.398 10:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:29.398 10:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:29.398 10:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:29.398 10:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:29.398 10:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:29.398 10:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:29.398 10:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.398 10:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:29.398 10:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.398 10:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.398 10:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:29.398 "name": "raid_bdev1",
00:12:29.398 "uuid": "eaed805c-7e7c-4826-9e0c-5131b02dff05",
00:12:29.398 "strip_size_kb": 0,
00:12:29.398 "state": "online",
00:12:29.398 "raid_level": "raid1",
00:12:29.398 "superblock": false,
00:12:29.398 "num_base_bdevs": 2,
00:12:29.398 "num_base_bdevs_discovered": 2,
00:12:29.398 "num_base_bdevs_operational": 2,
00:12:29.398 "process": {
00:12:29.398 "type": "rebuild",
00:12:29.398 "target": "spare",
00:12:29.398 "progress": {
00:12:29.398 "blocks": 20480,
00:12:29.398 "percent": 31
00:12:29.398 }
00:12:29.398 },
00:12:29.398 "base_bdevs_list": [
00:12:29.398 {
00:12:29.398 "name": "spare",
00:12:29.398 "uuid": "91c0412e-2a91-5006-82cd-103b2bf3810b",
00:12:29.398 "is_configured": true,
00:12:29.398 "data_offset": 0,
00:12:29.398
"data_size": 65536 00:12:29.398 }, 00:12:29.398 { 00:12:29.398 "name": "BaseBdev2", 00:12:29.398 "uuid": "3073581f-9b0d-59cf-85ce-9dfa843c5213", 00:12:29.398 "is_configured": true, 00:12:29.398 "data_offset": 0, 00:12:29.398 "data_size": 65536 00:12:29.398 } 00:12:29.398 ] 00:12:29.398 }' 00:12:29.398 10:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.398 [2024-11-15 10:57:36.062362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.398 [2024-11-15 10:57:36.125546] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:29.398 [2024-11-15 10:57:36.125755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.398 [2024-11-15 10:57:36.125778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.398 [2024-11-15 10:57:36.125796] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1
00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.398 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:29.398 "name": "raid_bdev1",
00:12:29.398 "uuid": "eaed805c-7e7c-4826-9e0c-5131b02dff05",
00:12:29.398 "strip_size_kb": 0,
00:12:29.399 "state": "online",
00:12:29.399 "raid_level": "raid1",
00:12:29.399 "superblock": false,
00:12:29.399 "num_base_bdevs": 2,
00:12:29.399 "num_base_bdevs_discovered": 1,
00:12:29.399 "num_base_bdevs_operational": 1,
00:12:29.399 "base_bdevs_list": [
00:12:29.399 {
00:12:29.399 "name": null,
00:12:29.399 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:29.399
"is_configured": false, 00:12:29.399 "data_offset": 0, 00:12:29.399 "data_size": 65536 00:12:29.399 }, 00:12:29.399 { 00:12:29.399 "name": "BaseBdev2", 00:12:29.399 "uuid": "3073581f-9b0d-59cf-85ce-9dfa843c5213", 00:12:29.399 "is_configured": true, 00:12:29.399 "data_offset": 0, 00:12:29.399 "data_size": 65536 00:12:29.399 } 00:12:29.399 ] 00:12:29.399 }' 00:12:29.399 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.399 10:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.967 "name": "raid_bdev1", 00:12:29.967 "uuid": "eaed805c-7e7c-4826-9e0c-5131b02dff05", 00:12:29.967 "strip_size_kb": 0, 00:12:29.967 "state": "online", 00:12:29.967 "raid_level": "raid1", 00:12:29.967 "superblock": false, 00:12:29.967 "num_base_bdevs": 2, 00:12:29.967 
"num_base_bdevs_discovered": 1, 00:12:29.967 "num_base_bdevs_operational": 1, 00:12:29.967 "base_bdevs_list": [ 00:12:29.967 { 00:12:29.967 "name": null, 00:12:29.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.967 "is_configured": false, 00:12:29.967 "data_offset": 0, 00:12:29.967 "data_size": 65536 00:12:29.967 }, 00:12:29.967 { 00:12:29.967 "name": "BaseBdev2", 00:12:29.967 "uuid": "3073581f-9b0d-59cf-85ce-9dfa843c5213", 00:12:29.967 "is_configured": true, 00:12:29.967 "data_offset": 0, 00:12:29.967 "data_size": 65536 00:12:29.967 } 00:12:29.967 ] 00:12:29.967 }' 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.967 [2024-11-15 10:57:36.722062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:29.967 [2024-11-15 10:57:36.740375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.967 10:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:29.967 [2024-11-15 10:57:36.742504] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:30.923 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:30.923 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:30.923 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:30.923 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:30.923 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:30.923 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:30.923 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:30.923 10:57:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:30.923 10:57:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.923 10:57:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:30.923 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:30.923 "name": "raid_bdev1",
00:12:30.923 "uuid": "eaed805c-7e7c-4826-9e0c-5131b02dff05",
00:12:30.923 "strip_size_kb": 0,
00:12:30.923 "state": "online",
00:12:30.923 "raid_level": "raid1",
00:12:30.923 "superblock": false,
00:12:30.923 "num_base_bdevs": 2,
00:12:30.923 "num_base_bdevs_discovered": 2,
00:12:30.923 "num_base_bdevs_operational": 2,
00:12:30.923 "process": {
00:12:30.923 "type": "rebuild",
00:12:30.923 "target": "spare",
00:12:30.923 "progress": {
00:12:30.923 "blocks": 20480,
00:12:30.923 "percent": 31
00:12:30.923 }
00:12:30.923 },
00:12:30.923 "base_bdevs_list": [
00:12:30.923 {
00:12:30.923 "name": "spare",
00:12:30.923 "uuid": "91c0412e-2a91-5006-82cd-103b2bf3810b",
00:12:30.923 "is_configured": true,
00:12:30.923 "data_offset": 0,
00:12:30.923 "data_size": 65536
00:12:30.923 },
00:12:30.923 {
00:12:30.923 "name": "BaseBdev2",
00:12:30.923 "uuid":
"3073581f-9b0d-59cf-85ce-9dfa843c5213", 00:12:30.923 "is_configured": true, 00:12:30.923 "data_offset": 0, 00:12:30.923 "data_size": 65536 00:12:30.923 } 00:12:30.923 ] 00:12:30.923 }' 00:12:30.923 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.923 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:30.923 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=376 00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")'
00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:31.183 "name": "raid_bdev1",
00:12:31.183 "uuid": "eaed805c-7e7c-4826-9e0c-5131b02dff05",
00:12:31.183 "strip_size_kb": 0,
00:12:31.183 "state": "online",
00:12:31.183 "raid_level": "raid1",
00:12:31.183 "superblock": false,
00:12:31.183 "num_base_bdevs": 2,
00:12:31.183 "num_base_bdevs_discovered": 2,
00:12:31.183 "num_base_bdevs_operational": 2,
00:12:31.183 "process": {
00:12:31.183 "type": "rebuild",
00:12:31.183 "target": "spare",
00:12:31.183 "progress": {
00:12:31.183 "blocks": 22528,
00:12:31.183 "percent": 34
00:12:31.183 }
00:12:31.183 },
00:12:31.183 "base_bdevs_list": [
00:12:31.183 {
00:12:31.183 "name": "spare",
00:12:31.183 "uuid": "91c0412e-2a91-5006-82cd-103b2bf3810b",
00:12:31.183 "is_configured": true,
00:12:31.183 "data_offset": 0,
00:12:31.183 "data_size": 65536
00:12:31.183 },
00:12:31.183 {
00:12:31.183 "name": "BaseBdev2",
00:12:31.183 "uuid": "3073581f-9b0d-59cf-85ce-9dfa843c5213",
00:12:31.183 "is_configured": true,
00:12:31.183 "data_offset": 0,
00:12:31.183 "data_size": 65536
00:12:31.183 }
00:12:31.183 ]
00:12:31.183 }'
00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:31.183 10:57:37 bdev_raid.raid_rebuild_test --
bdev/bdev_raid.sh@711 -- # sleep 1
00:12:32.122 10:57:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:32.122 10:57:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:32.122 10:57:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:32.122 10:57:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:32.122 10:57:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:32.122 10:57:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:32.122 10:57:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:32.122 10:57:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.122 10:57:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:32.122 10:57:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:32.122 10:57:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.381 10:57:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:32.381 "name": "raid_bdev1",
00:12:32.381 "uuid": "eaed805c-7e7c-4826-9e0c-5131b02dff05",
00:12:32.381 "strip_size_kb": 0,
00:12:32.381 "state": "online",
00:12:32.381 "raid_level": "raid1",
00:12:32.381 "superblock": false,
00:12:32.381 "num_base_bdevs": 2,
00:12:32.381 "num_base_bdevs_discovered": 2,
00:12:32.381 "num_base_bdevs_operational": 2,
00:12:32.381 "process": {
00:12:32.381 "type": "rebuild",
00:12:32.381 "target": "spare",
00:12:32.381 "progress": {
00:12:32.381 "blocks": 45056,
00:12:32.381 "percent": 68
00:12:32.381 }
00:12:32.381 },
00:12:32.381 "base_bdevs_list": [
00:12:32.381 {
00:12:32.381 "name": "spare",
00:12:32.381 "uuid":
"91c0412e-2a91-5006-82cd-103b2bf3810b", 00:12:32.381 "is_configured": true, 00:12:32.381 "data_offset": 0, 00:12:32.381 "data_size": 65536 00:12:32.381 }, 00:12:32.381 { 00:12:32.381 "name": "BaseBdev2", 00:12:32.381 "uuid": "3073581f-9b0d-59cf-85ce-9dfa843c5213", 00:12:32.381 "is_configured": true, 00:12:32.381 "data_offset": 0, 00:12:32.381 "data_size": 65536 00:12:32.381 } 00:12:32.381 ] 00:12:32.381 }' 00:12:32.381 10:57:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.381 10:57:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:32.381 10:57:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.381 10:57:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.381 10:57:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:33.319 [2024-11-15 10:57:39.958531] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:33.319 [2024-11-15 10:57:39.958634] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:33.319 [2024-11-15 10:57:39.958707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.319 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:33.319 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.319 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.319 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.319 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.319 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.319 10:57:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:33.319 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:33.319 10:57:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.319 10:57:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:33.319 10:57:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.319 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:33.319 "name": "raid_bdev1",
00:12:33.319 "uuid": "eaed805c-7e7c-4826-9e0c-5131b02dff05",
00:12:33.319 "strip_size_kb": 0,
00:12:33.319 "state": "online",
00:12:33.319 "raid_level": "raid1",
00:12:33.319 "superblock": false,
00:12:33.319 "num_base_bdevs": 2,
00:12:33.319 "num_base_bdevs_discovered": 2,
00:12:33.319 "num_base_bdevs_operational": 2,
00:12:33.319 "base_bdevs_list": [
00:12:33.319 {
00:12:33.319 "name": "spare",
00:12:33.319 "uuid": "91c0412e-2a91-5006-82cd-103b2bf3810b",
00:12:33.319 "is_configured": true,
00:12:33.319 "data_offset": 0,
00:12:33.319 "data_size": 65536
00:12:33.319 },
00:12:33.319 {
00:12:33.319 "name": "BaseBdev2",
00:12:33.319 "uuid": "3073581f-9b0d-59cf-85ce-9dfa843c5213",
00:12:33.319 "is_configured": true,
00:12:33.319 "data_offset": 0,
00:12:33.319 "data_size": 65536
00:12:33.319 }
00:12:33.319 ]
00:12:33.319 }'
00:12:33.319 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:33.579 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:12:33.579 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:33.579 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:12:33.579 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- #
break
00:12:33.579 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:33.579 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:33.579 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:33.579 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:33.579 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:33.579 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:33.579 10:57:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.579 10:57:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:33.579 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:33.579 10:57:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.579 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:33.579 "name": "raid_bdev1",
00:12:33.579 "uuid": "eaed805c-7e7c-4826-9e0c-5131b02dff05",
00:12:33.579 "strip_size_kb": 0,
00:12:33.580 "state": "online",
00:12:33.580 "raid_level": "raid1",
00:12:33.580 "superblock": false,
00:12:33.580 "num_base_bdevs": 2,
00:12:33.580 "num_base_bdevs_discovered": 2,
00:12:33.580 "num_base_bdevs_operational": 2,
00:12:33.580 "base_bdevs_list": [
00:12:33.580 {
00:12:33.580 "name": "spare",
00:12:33.580 "uuid": "91c0412e-2a91-5006-82cd-103b2bf3810b",
00:12:33.580 "is_configured": true,
00:12:33.580 "data_offset": 0,
00:12:33.580 "data_size": 65536
00:12:33.580 },
00:12:33.580 {
00:12:33.580 "name": "BaseBdev2",
00:12:33.580 "uuid": "3073581f-9b0d-59cf-85ce-9dfa843c5213",
00:12:33.580 "is_configured": true,
00:12:33.580 "data_offset": 0,
00:12:33.580 "data_size": 65536
00:12:33.580 }
00:12:33.580 ]
00:12:33.580 }'
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:33.580
10:57:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:33.580 "name": "raid_bdev1",
00:12:33.580 "uuid": "eaed805c-7e7c-4826-9e0c-5131b02dff05",
00:12:33.580 "strip_size_kb": 0,
00:12:33.580 "state": "online",
00:12:33.580 "raid_level": "raid1",
00:12:33.580 "superblock": false,
00:12:33.580 "num_base_bdevs": 2,
00:12:33.580 "num_base_bdevs_discovered": 2,
00:12:33.580 "num_base_bdevs_operational": 2,
00:12:33.580 "base_bdevs_list": [
00:12:33.580 {
00:12:33.580 "name": "spare",
00:12:33.580 "uuid": "91c0412e-2a91-5006-82cd-103b2bf3810b",
00:12:33.580 "is_configured": true,
00:12:33.580 "data_offset": 0,
00:12:33.580 "data_size": 65536
00:12:33.580 },
00:12:33.580 {
00:12:33.580 "name": "BaseBdev2",
00:12:33.580 "uuid": "3073581f-9b0d-59cf-85ce-9dfa843c5213",
00:12:33.580 "is_configured": true,
00:12:33.580 "data_offset": 0,
00:12:33.580 "data_size": 65536
00:12:33.580 }
00:12:33.580 ]
00:12:33.580 }'
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:33.580 10:57:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:34.182 [2024-11-15 10:57:40.887820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:34.182 [2024-11-15 10:57:40.887947] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:34.182 [2024-11-15 10:57:40.888063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:34.182 [2024-11-15 10:57:40.888148] bdev_raid.c:
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:34.182 [2024-11-15 10:57:40.888161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test --
bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:34.182 10:57:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:12:34.442 /dev/nbd0
00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i
00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break
00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:34.442 1+0 records in
00:12:34.442 1+0 records out
00:12:34.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449833 s, 9.1 MB/s
00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096
00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- #
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:34.442 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:34.702 /dev/nbd1 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.702 1+0 records in 00:12:34.702 1+0 records out 00:12:34.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539078 s, 7.6 MB/s 00:12:34.702 10:57:41 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:34.702 10:57:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:34.961 10:57:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:34.961 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:34.961 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:34.961 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:34.961 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:34.961 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.961 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:35.220 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:35.221 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:35.221 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:35.221 
10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.221 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.221 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:35.221 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:35.221 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.221 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.221 10:57:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75460 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 75460 ']' 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75460 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 
-- # uname 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75460 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:35.479 killing process with pid 75460 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75460' 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75460 00:12:35.479 Received shutdown signal, test time was about 60.000000 seconds 00:12:35.479 00:12:35.479 Latency(us) 00:12:35.479 [2024-11-15T10:57:42.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.479 [2024-11-15T10:57:42.407Z] =================================================================================================================== 00:12:35.479 [2024-11-15T10:57:42.407Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:35.479 [2024-11-15 10:57:42.239916] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:35.479 10:57:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75460 00:12:35.737 [2024-11-15 10:57:42.573532] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:37.117 00:12:37.117 real 0m16.454s 00:12:37.117 user 0m18.023s 00:12:37.117 sys 0m3.292s 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:37.117 ************************************ 00:12:37.117 END TEST raid_rebuild_test 00:12:37.117 ************************************ 00:12:37.117 10:57:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.117 10:57:43 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:37.117 10:57:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:37.117 10:57:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:37.117 10:57:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:37.117 ************************************ 00:12:37.117 START TEST raid_rebuild_test_sb 00:12:37.117 ************************************ 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:37.117 10:57:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75891 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75891 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 75891 ']' 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:37.117 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:37.117 10:57:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.117 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:37.117 Zero copy mechanism will not be used. 00:12:37.117 [2024-11-15 10:57:43.952962] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:12:37.117 [2024-11-15 10:57:43.953082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75891 ] 00:12:37.376 [2024-11-15 10:57:44.130277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.376 [2024-11-15 10:57:44.266243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.636 [2024-11-15 10:57:44.492985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.636 [2024-11-15 10:57:44.493059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.206 10:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:38.206 10:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:38.206 10:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:38.206 10:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:38.206 10:57:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.206 10:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.206 BaseBdev1_malloc 00:12:38.206 10:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.206 10:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:38.206 10:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.206 10:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.206 [2024-11-15 10:57:44.907591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:38.206 [2024-11-15 10:57:44.907685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.206 [2024-11-15 10:57:44.907714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:38.206 [2024-11-15 10:57:44.907728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.206 [2024-11-15 10:57:44.910016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.206 [2024-11-15 10:57:44.910061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:38.206 BaseBdev1 00:12:38.206 10:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.206 10:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:38.206 10:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:38.206 10:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.206 10:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.207 BaseBdev2_malloc 00:12:38.207 
10:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.207 10:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:38.207 10:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.207 10:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.207 [2024-11-15 10:57:44.964212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:38.207 [2024-11-15 10:57:44.964306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.207 [2024-11-15 10:57:44.964347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:38.207 [2024-11-15 10:57:44.964365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.207 [2024-11-15 10:57:44.966686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.207 [2024-11-15 10:57:44.966735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:38.207 BaseBdev2 00:12:38.207 10:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.207 10:57:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:38.207 10:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.207 10:57:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.207 spare_malloc 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.207 spare_delay 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.207 [2024-11-15 10:57:45.044219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:38.207 [2024-11-15 10:57:45.044311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.207 [2024-11-15 10:57:45.044338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:38.207 [2024-11-15 10:57:45.044352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.207 [2024-11-15 10:57:45.046698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.207 [2024-11-15 10:57:45.046742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:38.207 spare 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.207 [2024-11-15 10:57:45.056259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.207 [2024-11-15 
10:57:45.058109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.207 [2024-11-15 10:57:45.058343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:38.207 [2024-11-15 10:57:45.058364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:38.207 [2024-11-15 10:57:45.058638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:38.207 [2024-11-15 10:57:45.058820] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:38.207 [2024-11-15 10:57:45.058836] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:38.207 [2024-11-15 10:57:45.059017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.207 "name": "raid_bdev1", 00:12:38.207 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:38.207 "strip_size_kb": 0, 00:12:38.207 "state": "online", 00:12:38.207 "raid_level": "raid1", 00:12:38.207 "superblock": true, 00:12:38.207 "num_base_bdevs": 2, 00:12:38.207 "num_base_bdevs_discovered": 2, 00:12:38.207 "num_base_bdevs_operational": 2, 00:12:38.207 "base_bdevs_list": [ 00:12:38.207 { 00:12:38.207 "name": "BaseBdev1", 00:12:38.207 "uuid": "a9b574d6-b072-5cf1-a012-f69c138d830c", 00:12:38.207 "is_configured": true, 00:12:38.207 "data_offset": 2048, 00:12:38.207 "data_size": 63488 00:12:38.207 }, 00:12:38.207 { 00:12:38.207 "name": "BaseBdev2", 00:12:38.207 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:38.207 "is_configured": true, 00:12:38.207 "data_offset": 2048, 00:12:38.207 "data_size": 63488 00:12:38.207 } 00:12:38.207 ] 00:12:38.207 }' 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.207 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- 
# rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.777 [2024-11-15 10:57:45.527811] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:38.777 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:38.778 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:12:38.778 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:38.778 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:38.778 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:38.778 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:38.778 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:38.778 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:39.037 [2024-11-15 10:57:45.799134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:39.037 /dev/nbd0 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.037 1+0 records in 00:12:39.037 1+0 records out 00:12:39.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475436 s, 8.6 MB/s 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:39.037 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:39.038 10:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:44.324 63488+0 records in 00:12:44.324 63488+0 records out 00:12:44.324 32505856 bytes (33 MB, 31 MiB) copied, 4.48369 s, 7.2 MB/s 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@51 -- # local i 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:44.324 [2024-11-15 10:57:50.573031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.324 [2024-11-15 10:57:50.585145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.324 "name": "raid_bdev1", 00:12:44.324 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:44.324 "strip_size_kb": 0, 00:12:44.324 "state": "online", 00:12:44.324 "raid_level": "raid1", 00:12:44.324 "superblock": true, 00:12:44.324 "num_base_bdevs": 2, 00:12:44.324 "num_base_bdevs_discovered": 1, 00:12:44.324 "num_base_bdevs_operational": 1, 00:12:44.324 "base_bdevs_list": [ 00:12:44.324 { 00:12:44.324 "name": null, 00:12:44.324 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:44.324 "is_configured": false, 00:12:44.324 "data_offset": 0, 00:12:44.324 "data_size": 63488 00:12:44.324 }, 00:12:44.324 { 00:12:44.324 "name": "BaseBdev2", 00:12:44.324 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:44.324 "is_configured": true, 00:12:44.324 "data_offset": 2048, 00:12:44.324 "data_size": 63488 00:12:44.324 } 00:12:44.324 ] 00:12:44.324 }' 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.324 10:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.324 10:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:44.324 10:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.324 10:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.324 [2024-11-15 10:57:51.044406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:44.324 [2024-11-15 10:57:51.063276] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:44.324 10:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.324 [2024-11-15 10:57:51.065495] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:44.324 10:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:45.263 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.263 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.263 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.263 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.263 
10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.263 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.263 10:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.263 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.263 10:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.263 10:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.263 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.263 "name": "raid_bdev1", 00:12:45.263 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:45.263 "strip_size_kb": 0, 00:12:45.263 "state": "online", 00:12:45.263 "raid_level": "raid1", 00:12:45.263 "superblock": true, 00:12:45.263 "num_base_bdevs": 2, 00:12:45.263 "num_base_bdevs_discovered": 2, 00:12:45.263 "num_base_bdevs_operational": 2, 00:12:45.263 "process": { 00:12:45.263 "type": "rebuild", 00:12:45.263 "target": "spare", 00:12:45.263 "progress": { 00:12:45.263 "blocks": 20480, 00:12:45.263 "percent": 32 00:12:45.263 } 00:12:45.263 }, 00:12:45.263 "base_bdevs_list": [ 00:12:45.263 { 00:12:45.263 "name": "spare", 00:12:45.263 "uuid": "fa48a2e1-b729-5cc4-a6e4-4d8cac6ca7af", 00:12:45.263 "is_configured": true, 00:12:45.263 "data_offset": 2048, 00:12:45.263 "data_size": 63488 00:12:45.263 }, 00:12:45.263 { 00:12:45.263 "name": "BaseBdev2", 00:12:45.263 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:45.263 "is_configured": true, 00:12:45.263 "data_offset": 2048, 00:12:45.263 "data_size": 63488 00:12:45.263 } 00:12:45.263 ] 00:12:45.263 }' 00:12:45.263 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.263 10:57:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.263 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.523 [2024-11-15 10:57:52.232371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:45.523 [2024-11-15 10:57:52.271355] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:45.523 [2024-11-15 10:57:52.271445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.523 [2024-11-15 10:57:52.271463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:45.523 [2024-11-15 10:57:52.271474] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.523 "name": "raid_bdev1", 00:12:45.523 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:45.523 "strip_size_kb": 0, 00:12:45.523 "state": "online", 00:12:45.523 "raid_level": "raid1", 00:12:45.523 "superblock": true, 00:12:45.523 "num_base_bdevs": 2, 00:12:45.523 "num_base_bdevs_discovered": 1, 00:12:45.523 "num_base_bdevs_operational": 1, 00:12:45.523 "base_bdevs_list": [ 00:12:45.523 { 00:12:45.523 "name": null, 00:12:45.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.523 "is_configured": false, 00:12:45.523 "data_offset": 0, 00:12:45.523 "data_size": 63488 00:12:45.523 }, 00:12:45.523 { 00:12:45.523 "name": "BaseBdev2", 00:12:45.523 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:45.523 "is_configured": true, 00:12:45.523 "data_offset": 2048, 00:12:45.523 "data_size": 63488 00:12:45.523 } 00:12:45.523 ] 00:12:45.523 }' 00:12:45.523 10:57:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.523 10:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.092 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:46.092 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.092 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:46.092 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:46.092 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.092 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.092 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.092 10:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.092 10:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.092 10:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.092 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.092 "name": "raid_bdev1", 00:12:46.092 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:46.092 "strip_size_kb": 0, 00:12:46.092 "state": "online", 00:12:46.092 "raid_level": "raid1", 00:12:46.092 "superblock": true, 00:12:46.092 "num_base_bdevs": 2, 00:12:46.092 "num_base_bdevs_discovered": 1, 00:12:46.092 "num_base_bdevs_operational": 1, 00:12:46.092 "base_bdevs_list": [ 00:12:46.092 { 00:12:46.092 "name": null, 00:12:46.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.092 "is_configured": false, 00:12:46.092 "data_offset": 0, 00:12:46.092 "data_size": 63488 00:12:46.092 }, 00:12:46.092 
{ 00:12:46.092 "name": "BaseBdev2", 00:12:46.092 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:46.092 "is_configured": true, 00:12:46.092 "data_offset": 2048, 00:12:46.092 "data_size": 63488 00:12:46.092 } 00:12:46.092 ] 00:12:46.092 }' 00:12:46.092 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.093 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:46.093 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.093 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:46.093 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:46.093 10:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.093 10:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.093 [2024-11-15 10:57:52.887281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.093 [2024-11-15 10:57:52.906041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:46.093 10:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.093 10:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:46.093 [2024-11-15 10:57:52.908169] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:47.031 10:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.031 10:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.031 10:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.031 10:57:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.031 10:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.031 10:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.031 10:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.031 10:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.031 10:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.031 10:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.292 10:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.292 "name": "raid_bdev1", 00:12:47.292 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:47.292 "strip_size_kb": 0, 00:12:47.292 "state": "online", 00:12:47.292 "raid_level": "raid1", 00:12:47.292 "superblock": true, 00:12:47.292 "num_base_bdevs": 2, 00:12:47.292 "num_base_bdevs_discovered": 2, 00:12:47.292 "num_base_bdevs_operational": 2, 00:12:47.292 "process": { 00:12:47.292 "type": "rebuild", 00:12:47.292 "target": "spare", 00:12:47.292 "progress": { 00:12:47.292 "blocks": 20480, 00:12:47.292 "percent": 32 00:12:47.292 } 00:12:47.292 }, 00:12:47.292 "base_bdevs_list": [ 00:12:47.292 { 00:12:47.292 "name": "spare", 00:12:47.292 "uuid": "fa48a2e1-b729-5cc4-a6e4-4d8cac6ca7af", 00:12:47.292 "is_configured": true, 00:12:47.292 "data_offset": 2048, 00:12:47.292 "data_size": 63488 00:12:47.292 }, 00:12:47.292 { 00:12:47.292 "name": "BaseBdev2", 00:12:47.292 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:47.292 "is_configured": true, 00:12:47.292 "data_offset": 2048, 00:12:47.292 "data_size": 63488 00:12:47.292 } 00:12:47.292 ] 00:12:47.292 }' 00:12:47.292 10:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:12:47.292 10:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.292 10:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:47.292 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=393 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
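The `[: =: unary operator expected` error from bdev_raid.sh line 666 above is the classic bash pitfall of an unquoted variable that expands to nothing: `'[' = false ']'` shows the left operand vanished, leaving `[` with only `= false`. A minimal sketch of the failure mode and the usual quoting fix (hypothetical variable name, not taken from the SPDK script):

```shell
#!/bin/sh
# flag is empty, standing in for whatever variable was unset in the test above.
flag=""

# Unquoted, `[ $flag = false ]` would collapse to `[ = false ]` and fail with
# "[: =: unary operator expected". Quoting keeps the empty operand present:
if [ "$flag" = false ]; then
  echo "flag is false"
else
  echo "flag is empty or not false"
fi
```

With quoting, an empty value is compared as an ordinary (empty) string instead of disappearing from the argument list, so the test degrades gracefully rather than erroring.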
00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.292 "name": "raid_bdev1", 00:12:47.292 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:47.292 "strip_size_kb": 0, 00:12:47.292 "state": "online", 00:12:47.292 "raid_level": "raid1", 00:12:47.292 "superblock": true, 00:12:47.292 "num_base_bdevs": 2, 00:12:47.292 "num_base_bdevs_discovered": 2, 00:12:47.292 "num_base_bdevs_operational": 2, 00:12:47.292 "process": { 00:12:47.292 "type": "rebuild", 00:12:47.292 "target": "spare", 00:12:47.292 "progress": { 00:12:47.292 "blocks": 22528, 00:12:47.292 "percent": 35 00:12:47.292 } 00:12:47.292 }, 00:12:47.292 "base_bdevs_list": [ 00:12:47.292 { 00:12:47.292 "name": "spare", 00:12:47.292 "uuid": "fa48a2e1-b729-5cc4-a6e4-4d8cac6ca7af", 00:12:47.292 "is_configured": true, 00:12:47.292 "data_offset": 2048, 00:12:47.292 "data_size": 63488 00:12:47.292 }, 00:12:47.292 { 00:12:47.292 "name": "BaseBdev2", 00:12:47.292 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:47.292 "is_configured": true, 00:12:47.292 "data_offset": 2048, 00:12:47.292 "data_size": 63488 00:12:47.292 } 00:12:47.292 ] 00:12:47.292 }' 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.292 10:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.292 10:57:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:48.672 10:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:48.672 10:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.672 10:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.672 10:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.672 10:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.672 10:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.672 10:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.672 10:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.672 10:57:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.672 10:57:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.672 10:57:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.672 10:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.673 "name": "raid_bdev1", 00:12:48.673 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:48.673 "strip_size_kb": 0, 00:12:48.673 "state": "online", 00:12:48.673 "raid_level": "raid1", 00:12:48.673 "superblock": true, 00:12:48.673 "num_base_bdevs": 2, 00:12:48.673 "num_base_bdevs_discovered": 2, 00:12:48.673 "num_base_bdevs_operational": 2, 00:12:48.673 "process": { 00:12:48.673 "type": "rebuild", 00:12:48.673 "target": "spare", 00:12:48.673 "progress": { 00:12:48.673 "blocks": 45056, 00:12:48.673 "percent": 70 00:12:48.673 } 00:12:48.673 }, 00:12:48.673 "base_bdevs_list": [ 00:12:48.673 { 
00:12:48.673 "name": "spare", 00:12:48.673 "uuid": "fa48a2e1-b729-5cc4-a6e4-4d8cac6ca7af", 00:12:48.673 "is_configured": true, 00:12:48.673 "data_offset": 2048, 00:12:48.673 "data_size": 63488 00:12:48.673 }, 00:12:48.673 { 00:12:48.673 "name": "BaseBdev2", 00:12:48.673 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:48.673 "is_configured": true, 00:12:48.673 "data_offset": 2048, 00:12:48.673 "data_size": 63488 00:12:48.673 } 00:12:48.673 ] 00:12:48.673 }' 00:12:48.673 10:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.673 10:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.673 10:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.673 10:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.673 10:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:49.242 [2024-11-15 10:57:56.023250] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:49.242 [2024-11-15 10:57:56.023357] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:49.242 [2024-11-15 10:57:56.023522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.502 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.502 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.502 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.502 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.502 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.502 10:57:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.502 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.502 10:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.502 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.502 10:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.502 10:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.502 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.502 "name": "raid_bdev1", 00:12:49.502 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:49.502 "strip_size_kb": 0, 00:12:49.502 "state": "online", 00:12:49.502 "raid_level": "raid1", 00:12:49.502 "superblock": true, 00:12:49.502 "num_base_bdevs": 2, 00:12:49.502 "num_base_bdevs_discovered": 2, 00:12:49.502 "num_base_bdevs_operational": 2, 00:12:49.502 "base_bdevs_list": [ 00:12:49.502 { 00:12:49.502 "name": "spare", 00:12:49.502 "uuid": "fa48a2e1-b729-5cc4-a6e4-4d8cac6ca7af", 00:12:49.502 "is_configured": true, 00:12:49.502 "data_offset": 2048, 00:12:49.502 "data_size": 63488 00:12:49.502 }, 00:12:49.502 { 00:12:49.502 "name": "BaseBdev2", 00:12:49.502 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:49.502 "is_configured": true, 00:12:49.502 "data_offset": 2048, 00:12:49.502 "data_size": 63488 00:12:49.502 } 00:12:49.502 ] 00:12:49.502 }' 00:12:49.502 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.502 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:49.502 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.762 "name": "raid_bdev1", 00:12:49.762 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:49.762 "strip_size_kb": 0, 00:12:49.762 "state": "online", 00:12:49.762 "raid_level": "raid1", 00:12:49.762 "superblock": true, 00:12:49.762 "num_base_bdevs": 2, 00:12:49.762 "num_base_bdevs_discovered": 2, 00:12:49.762 "num_base_bdevs_operational": 2, 00:12:49.762 "base_bdevs_list": [ 00:12:49.762 { 00:12:49.762 "name": "spare", 00:12:49.762 "uuid": "fa48a2e1-b729-5cc4-a6e4-4d8cac6ca7af", 00:12:49.762 "is_configured": true, 00:12:49.762 "data_offset": 2048, 00:12:49.762 "data_size": 63488 00:12:49.762 }, 00:12:49.762 { 00:12:49.762 "name": 
"BaseBdev2", 00:12:49.762 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:49.762 "is_configured": true, 00:12:49.762 "data_offset": 2048, 00:12:49.762 "data_size": 63488 00:12:49.762 } 00:12:49.762 ] 00:12:49.762 }' 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.762 "name": "raid_bdev1", 00:12:49.762 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:49.762 "strip_size_kb": 0, 00:12:49.762 "state": "online", 00:12:49.762 "raid_level": "raid1", 00:12:49.762 "superblock": true, 00:12:49.762 "num_base_bdevs": 2, 00:12:49.762 "num_base_bdevs_discovered": 2, 00:12:49.762 "num_base_bdevs_operational": 2, 00:12:49.762 "base_bdevs_list": [ 00:12:49.762 { 00:12:49.762 "name": "spare", 00:12:49.762 "uuid": "fa48a2e1-b729-5cc4-a6e4-4d8cac6ca7af", 00:12:49.762 "is_configured": true, 00:12:49.762 "data_offset": 2048, 00:12:49.762 "data_size": 63488 00:12:49.762 }, 00:12:49.762 { 00:12:49.762 "name": "BaseBdev2", 00:12:49.762 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:49.762 "is_configured": true, 00:12:49.762 "data_offset": 2048, 00:12:49.762 "data_size": 63488 00:12:49.762 } 00:12:49.762 ] 00:12:49.762 }' 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.762 10:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.330 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:50.330 10:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.330 10:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.331 [2024-11-15 10:57:56.983259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:50.331 [2024-11-15 10:57:56.983314] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.331 [2024-11-15 10:57:56.983429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.331 [2024-11-15 10:57:56.983519] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.331 [2024-11-15 10:57:56.983536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:50.331 10:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.331 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.331 10:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:50.331 10:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.331 10:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.331 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.331 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:50.331 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:50.331 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:50.331 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:50.331 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.331 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:50.331 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:50.331 10:57:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:50.331 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:50.331 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:50.331 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:50.331 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:50.331 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:50.331 /dev/nbd0 00:12:50.590 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:50.590 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:50.590 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:50.590 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:12:50.590 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:50.590 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:50.590 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:50.590 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:12:50.590 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:50.591 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:50.591 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.591 1+0 records in 00:12:50.591 1+0 records out 00:12:50.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000429004 s, 9.5 MB/s 00:12:50.591 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.591 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:12:50.591 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.591 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:50.591 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:12:50.591 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.591 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:50.591 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:50.851 /dev/nbd1 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:50.851 10:57:57 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.851 1+0 records in 00:12:50.851 1+0 records out 00:12:50.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465128 s, 8.8 MB/s 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:50.851 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.851 
10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:51.111 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:51.111 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:51.111 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:51.111 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.111 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.111 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:51.111 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:51.111 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.111 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.111 10:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.371 [2024-11-15 10:57:58.212852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:51.371 [2024-11-15 10:57:58.212923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.371 [2024-11-15 10:57:58.212949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:51.371 [2024-11-15 10:57:58.212961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.371 [2024-11-15 10:57:58.215513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.371 [2024-11-15 10:57:58.215560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:51.371 [2024-11-15 10:57:58.215675] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:51.371 [2024-11-15 10:57:58.215775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:51.371 [2024-11-15 10:57:58.215995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:12:51.371 spare 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.371 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.631 [2024-11-15 10:57:58.315932] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:51.631 [2024-11-15 10:57:58.315970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:51.631 [2024-11-15 10:57:58.316279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:51.631 [2024-11-15 10:57:58.316498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:51.631 [2024-11-15 10:57:58.316522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:51.631 [2024-11-15 10:57:58.316735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.631 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.631 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:51.631 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.631 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.631 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.631 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.631 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:51.631 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.631 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.631 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.631 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.631 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.631 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.631 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.631 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.632 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.632 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.632 "name": "raid_bdev1", 00:12:51.632 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:51.632 "strip_size_kb": 0, 00:12:51.632 "state": "online", 00:12:51.632 "raid_level": "raid1", 00:12:51.632 "superblock": true, 00:12:51.632 "num_base_bdevs": 2, 00:12:51.632 "num_base_bdevs_discovered": 2, 00:12:51.632 "num_base_bdevs_operational": 2, 00:12:51.632 "base_bdevs_list": [ 00:12:51.632 { 00:12:51.632 "name": "spare", 00:12:51.632 "uuid": "fa48a2e1-b729-5cc4-a6e4-4d8cac6ca7af", 00:12:51.632 "is_configured": true, 00:12:51.632 "data_offset": 2048, 00:12:51.632 "data_size": 63488 00:12:51.632 }, 00:12:51.632 { 00:12:51.632 "name": "BaseBdev2", 00:12:51.632 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:51.632 "is_configured": true, 00:12:51.632 "data_offset": 2048, 00:12:51.632 "data_size": 63488 00:12:51.632 } 00:12:51.632 ] 00:12:51.632 }' 00:12:51.632 10:57:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.632 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.891 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:51.891 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.891 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:51.891 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:51.891 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.891 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.891 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.891 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.891 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.891 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.151 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.151 "name": "raid_bdev1", 00:12:52.151 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:52.151 "strip_size_kb": 0, 00:12:52.151 "state": "online", 00:12:52.151 "raid_level": "raid1", 00:12:52.151 "superblock": true, 00:12:52.151 "num_base_bdevs": 2, 00:12:52.151 "num_base_bdevs_discovered": 2, 00:12:52.151 "num_base_bdevs_operational": 2, 00:12:52.151 "base_bdevs_list": [ 00:12:52.151 { 00:12:52.151 "name": "spare", 00:12:52.151 "uuid": "fa48a2e1-b729-5cc4-a6e4-4d8cac6ca7af", 00:12:52.151 "is_configured": true, 00:12:52.151 "data_offset": 2048, 00:12:52.151 "data_size": 63488 00:12:52.151 }, 
00:12:52.151 { 00:12:52.151 "name": "BaseBdev2", 00:12:52.151 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:52.151 "is_configured": true, 00:12:52.151 "data_offset": 2048, 00:12:52.151 "data_size": 63488 00:12:52.151 } 00:12:52.151 ] 00:12:52.151 }' 00:12:52.151 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.151 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.151 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.151 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.151 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:52.151 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.151 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.152 [2024-11-15 10:57:58.971984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.152 10:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.152 10:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.152 "name": "raid_bdev1", 00:12:52.152 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:52.152 "strip_size_kb": 0, 00:12:52.152 "state": "online", 00:12:52.152 "raid_level": "raid1", 00:12:52.152 "superblock": true, 00:12:52.152 "num_base_bdevs": 2, 00:12:52.152 "num_base_bdevs_discovered": 1, 00:12:52.152 "num_base_bdevs_operational": 
1, 00:12:52.152 "base_bdevs_list": [ 00:12:52.152 { 00:12:52.152 "name": null, 00:12:52.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.152 "is_configured": false, 00:12:52.152 "data_offset": 0, 00:12:52.152 "data_size": 63488 00:12:52.152 }, 00:12:52.152 { 00:12:52.152 "name": "BaseBdev2", 00:12:52.152 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:52.152 "is_configured": true, 00:12:52.152 "data_offset": 2048, 00:12:52.152 "data_size": 63488 00:12:52.152 } 00:12:52.152 ] 00:12:52.152 }' 00:12:52.152 10:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.152 10:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.723 10:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:52.723 10:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.723 10:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.723 [2024-11-15 10:57:59.467132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:52.723 [2024-11-15 10:57:59.467354] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:52.723 [2024-11-15 10:57:59.467381] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:52.723 [2024-11-15 10:57:59.467421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:52.723 [2024-11-15 10:57:59.483960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:52.723 10:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.723 10:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:52.723 [2024-11-15 10:57:59.486028] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:53.660 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.660 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.660 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.660 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.660 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.660 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.660 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.660 10:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.660 10:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.660 10:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.660 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.660 "name": "raid_bdev1", 00:12:53.660 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:53.660 "strip_size_kb": 0, 00:12:53.660 "state": "online", 00:12:53.660 "raid_level": "raid1", 
00:12:53.660 "superblock": true, 00:12:53.660 "num_base_bdevs": 2, 00:12:53.660 "num_base_bdevs_discovered": 2, 00:12:53.660 "num_base_bdevs_operational": 2, 00:12:53.660 "process": { 00:12:53.660 "type": "rebuild", 00:12:53.660 "target": "spare", 00:12:53.660 "progress": { 00:12:53.660 "blocks": 20480, 00:12:53.660 "percent": 32 00:12:53.660 } 00:12:53.660 }, 00:12:53.660 "base_bdevs_list": [ 00:12:53.660 { 00:12:53.660 "name": "spare", 00:12:53.660 "uuid": "fa48a2e1-b729-5cc4-a6e4-4d8cac6ca7af", 00:12:53.660 "is_configured": true, 00:12:53.660 "data_offset": 2048, 00:12:53.660 "data_size": 63488 00:12:53.660 }, 00:12:53.660 { 00:12:53.660 "name": "BaseBdev2", 00:12:53.660 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:53.660 "is_configured": true, 00:12:53.660 "data_offset": 2048, 00:12:53.660 "data_size": 63488 00:12:53.660 } 00:12:53.660 ] 00:12:53.660 }' 00:12:53.660 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.918 [2024-11-15 10:58:00.649447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:53.918 [2024-11-15 10:58:00.691667] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:53.918 [2024-11-15 10:58:00.691748] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:53.918 [2024-11-15 10:58:00.691766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:53.918 [2024-11-15 10:58:00.691777] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.918 10:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.919 10:58:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.919 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.919 "name": "raid_bdev1", 00:12:53.919 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:53.919 "strip_size_kb": 0, 00:12:53.919 "state": "online", 00:12:53.919 "raid_level": "raid1", 00:12:53.919 "superblock": true, 00:12:53.919 "num_base_bdevs": 2, 00:12:53.919 "num_base_bdevs_discovered": 1, 00:12:53.919 "num_base_bdevs_operational": 1, 00:12:53.919 "base_bdevs_list": [ 00:12:53.919 { 00:12:53.919 "name": null, 00:12:53.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.919 "is_configured": false, 00:12:53.919 "data_offset": 0, 00:12:53.919 "data_size": 63488 00:12:53.919 }, 00:12:53.919 { 00:12:53.919 "name": "BaseBdev2", 00:12:53.919 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:53.919 "is_configured": true, 00:12:53.919 "data_offset": 2048, 00:12:53.919 "data_size": 63488 00:12:53.919 } 00:12:53.919 ] 00:12:53.919 }' 00:12:53.919 10:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.919 10:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.488 10:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:54.488 10:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.488 10:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.488 [2024-11-15 10:58:01.183951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:54.488 [2024-11-15 10:58:01.184032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.488 [2024-11-15 10:58:01.184058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:54.488 [2024-11-15 10:58:01.184073] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.488 [2024-11-15 10:58:01.184592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.488 [2024-11-15 10:58:01.184628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:54.488 [2024-11-15 10:58:01.184737] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:54.488 [2024-11-15 10:58:01.184764] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:54.488 [2024-11-15 10:58:01.184776] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:54.488 [2024-11-15 10:58:01.184807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:54.488 [2024-11-15 10:58:01.202789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:54.488 spare 00:12:54.488 10:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.488 [2024-11-15 10:58:01.204715] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:54.488 10:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:55.426 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.426 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.426 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.426 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.426 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.426 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:55.426 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.426 10:58:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.426 10:58:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.426 10:58:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.426 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.426 "name": "raid_bdev1", 00:12:55.426 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:55.426 "strip_size_kb": 0, 00:12:55.426 "state": "online", 00:12:55.426 "raid_level": "raid1", 00:12:55.426 "superblock": true, 00:12:55.426 "num_base_bdevs": 2, 00:12:55.426 "num_base_bdevs_discovered": 2, 00:12:55.426 "num_base_bdevs_operational": 2, 00:12:55.426 "process": { 00:12:55.426 "type": "rebuild", 00:12:55.426 "target": "spare", 00:12:55.426 "progress": { 00:12:55.426 "blocks": 20480, 00:12:55.426 "percent": 32 00:12:55.426 } 00:12:55.426 }, 00:12:55.426 "base_bdevs_list": [ 00:12:55.426 { 00:12:55.426 "name": "spare", 00:12:55.426 "uuid": "fa48a2e1-b729-5cc4-a6e4-4d8cac6ca7af", 00:12:55.426 "is_configured": true, 00:12:55.426 "data_offset": 2048, 00:12:55.426 "data_size": 63488 00:12:55.426 }, 00:12:55.426 { 00:12:55.426 "name": "BaseBdev2", 00:12:55.426 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:55.426 "is_configured": true, 00:12:55.426 "data_offset": 2048, 00:12:55.426 "data_size": 63488 00:12:55.426 } 00:12:55.426 ] 00:12:55.426 }' 00:12:55.426 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.426 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.426 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.686 
10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.686 [2024-11-15 10:58:02.360682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:55.686 [2024-11-15 10:58:02.410364] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:55.686 [2024-11-15 10:58:02.410443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.686 [2024-11-15 10:58:02.410464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:55.686 [2024-11-15 10:58:02.410473] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.686 "name": "raid_bdev1", 00:12:55.686 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:55.686 "strip_size_kb": 0, 00:12:55.686 "state": "online", 00:12:55.686 "raid_level": "raid1", 00:12:55.686 "superblock": true, 00:12:55.686 "num_base_bdevs": 2, 00:12:55.686 "num_base_bdevs_discovered": 1, 00:12:55.686 "num_base_bdevs_operational": 1, 00:12:55.686 "base_bdevs_list": [ 00:12:55.686 { 00:12:55.686 "name": null, 00:12:55.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.686 "is_configured": false, 00:12:55.686 "data_offset": 0, 00:12:55.686 "data_size": 63488 00:12:55.686 }, 00:12:55.686 { 00:12:55.686 "name": "BaseBdev2", 00:12:55.686 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:55.686 "is_configured": true, 00:12:55.686 "data_offset": 2048, 00:12:55.686 "data_size": 63488 00:12:55.686 } 00:12:55.686 ] 00:12:55.686 }' 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.686 10:58:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.254 10:58:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.254 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.254 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.254 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.254 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.254 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.254 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.254 10:58:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.254 10:58:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.254 10:58:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.254 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.254 "name": "raid_bdev1", 00:12:56.254 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:56.254 "strip_size_kb": 0, 00:12:56.254 "state": "online", 00:12:56.254 "raid_level": "raid1", 00:12:56.254 "superblock": true, 00:12:56.254 "num_base_bdevs": 2, 00:12:56.254 "num_base_bdevs_discovered": 1, 00:12:56.254 "num_base_bdevs_operational": 1, 00:12:56.254 "base_bdevs_list": [ 00:12:56.254 { 00:12:56.254 "name": null, 00:12:56.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.254 "is_configured": false, 00:12:56.254 "data_offset": 0, 00:12:56.254 "data_size": 63488 00:12:56.254 }, 00:12:56.254 { 00:12:56.254 "name": "BaseBdev2", 00:12:56.254 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:56.254 "is_configured": true, 00:12:56.254 "data_offset": 2048, 00:12:56.254 "data_size": 
63488 00:12:56.254 } 00:12:56.254 ] 00:12:56.254 }' 00:12:56.254 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.254 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.254 10:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.254 10:58:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.254 10:58:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:56.254 10:58:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.254 10:58:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.254 10:58:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.254 10:58:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:56.254 10:58:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.254 10:58:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.254 [2024-11-15 10:58:03.057261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:56.254 [2024-11-15 10:58:03.057348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.254 [2024-11-15 10:58:03.057375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:56.254 [2024-11-15 10:58:03.057397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.254 [2024-11-15 10:58:03.057879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.254 [2024-11-15 10:58:03.057910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:56.254 [2024-11-15 10:58:03.058006] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:56.254 [2024-11-15 10:58:03.058031] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:56.254 [2024-11-15 10:58:03.058043] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:56.254 [2024-11-15 10:58:03.058054] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:56.254 BaseBdev1 00:12:56.254 10:58:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.254 10:58:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.221 "name": "raid_bdev1", 00:12:57.221 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:57.221 "strip_size_kb": 0, 00:12:57.221 "state": "online", 00:12:57.221 "raid_level": "raid1", 00:12:57.221 "superblock": true, 00:12:57.221 "num_base_bdevs": 2, 00:12:57.221 "num_base_bdevs_discovered": 1, 00:12:57.221 "num_base_bdevs_operational": 1, 00:12:57.221 "base_bdevs_list": [ 00:12:57.221 { 00:12:57.221 "name": null, 00:12:57.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.221 "is_configured": false, 00:12:57.221 "data_offset": 0, 00:12:57.221 "data_size": 63488 00:12:57.221 }, 00:12:57.221 { 00:12:57.221 "name": "BaseBdev2", 00:12:57.221 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:57.221 "is_configured": true, 00:12:57.221 "data_offset": 2048, 00:12:57.221 "data_size": 63488 00:12:57.221 } 00:12:57.221 ] 00:12:57.221 }' 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.221 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.788 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:57.788 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.788 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:57.788 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:57.788 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.789 "name": "raid_bdev1", 00:12:57.789 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:57.789 "strip_size_kb": 0, 00:12:57.789 "state": "online", 00:12:57.789 "raid_level": "raid1", 00:12:57.789 "superblock": true, 00:12:57.789 "num_base_bdevs": 2, 00:12:57.789 "num_base_bdevs_discovered": 1, 00:12:57.789 "num_base_bdevs_operational": 1, 00:12:57.789 "base_bdevs_list": [ 00:12:57.789 { 00:12:57.789 "name": null, 00:12:57.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.789 "is_configured": false, 00:12:57.789 "data_offset": 0, 00:12:57.789 "data_size": 63488 00:12:57.789 }, 00:12:57.789 { 00:12:57.789 "name": "BaseBdev2", 00:12:57.789 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:57.789 "is_configured": true, 00:12:57.789 "data_offset": 2048, 00:12:57.789 "data_size": 63488 00:12:57.789 } 00:12:57.789 ] 00:12:57.789 }' 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:57.789 10:58:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.789 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.789 [2024-11-15 10:58:04.710531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:57.789 [2024-11-15 10:58:04.710733] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:57.789 [2024-11-15 10:58:04.710760] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:58.047 request: 00:12:58.047 { 00:12:58.047 "base_bdev": "BaseBdev1", 00:12:58.047 "raid_bdev": "raid_bdev1", 00:12:58.047 "method": 
"bdev_raid_add_base_bdev", 00:12:58.047 "req_id": 1 00:12:58.047 } 00:12:58.047 Got JSON-RPC error response 00:12:58.047 response: 00:12:58.047 { 00:12:58.047 "code": -22, 00:12:58.047 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:58.047 } 00:12:58.047 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:58.047 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:12:58.047 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:58.047 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:58.047 10:58:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:58.047 10:58:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.985 10:58:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.985 "name": "raid_bdev1", 00:12:58.985 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:58.985 "strip_size_kb": 0, 00:12:58.985 "state": "online", 00:12:58.985 "raid_level": "raid1", 00:12:58.985 "superblock": true, 00:12:58.985 "num_base_bdevs": 2, 00:12:58.985 "num_base_bdevs_discovered": 1, 00:12:58.985 "num_base_bdevs_operational": 1, 00:12:58.985 "base_bdevs_list": [ 00:12:58.985 { 00:12:58.985 "name": null, 00:12:58.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.985 "is_configured": false, 00:12:58.985 "data_offset": 0, 00:12:58.985 "data_size": 63488 00:12:58.985 }, 00:12:58.985 { 00:12:58.985 "name": "BaseBdev2", 00:12:58.985 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:58.985 "is_configured": true, 00:12:58.985 "data_offset": 2048, 00:12:58.985 "data_size": 63488 00:12:58.985 } 00:12:58.985 ] 00:12:58.985 }' 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.985 10:58:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.554 "name": "raid_bdev1", 00:12:59.554 "uuid": "a6db4f85-9992-46b0-83bb-37ec442c3395", 00:12:59.554 "strip_size_kb": 0, 00:12:59.554 "state": "online", 00:12:59.554 "raid_level": "raid1", 00:12:59.554 "superblock": true, 00:12:59.554 "num_base_bdevs": 2, 00:12:59.554 "num_base_bdevs_discovered": 1, 00:12:59.554 "num_base_bdevs_operational": 1, 00:12:59.554 "base_bdevs_list": [ 00:12:59.554 { 00:12:59.554 "name": null, 00:12:59.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.554 "is_configured": false, 00:12:59.554 "data_offset": 0, 00:12:59.554 "data_size": 63488 00:12:59.554 }, 00:12:59.554 { 00:12:59.554 "name": "BaseBdev2", 00:12:59.554 "uuid": "bf299635-cd88-5354-badb-e744f2f52729", 00:12:59.554 "is_configured": true, 00:12:59.554 "data_offset": 2048, 00:12:59.554 "data_size": 63488 00:12:59.554 } 00:12:59.554 ] 00:12:59.554 }' 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75891 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 75891 ']' 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 75891 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75891 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:59.554 killing process with pid 75891 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75891' 00:12:59.554 10:58:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 75891 00:12:59.554 Received shutdown signal, test time was about 60.000000 seconds 00:12:59.554 00:12:59.554 Latency(us) 00:12:59.554 [2024-11-15T10:58:06.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.554 [2024-11-15T10:58:06.482Z] =================================================================================================================== 00:12:59.554 [2024-11-15T10:58:06.482Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:59.554 [2024-11-15 10:58:06.387574] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.554 10:58:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 75891 00:12:59.554 [2024-11-15 10:58:06.387727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.554 [2024-11-15 10:58:06.387797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.554 [2024-11-15 10:58:06.387818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:59.813 [2024-11-15 10:58:06.691455] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:01.233 00:13:01.233 real 0m23.948s 00:13:01.233 user 0m29.294s 00:13:01.233 sys 0m3.792s 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.233 ************************************ 00:13:01.233 END TEST raid_rebuild_test_sb 00:13:01.233 ************************************ 00:13:01.233 10:58:07 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:01.233 10:58:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:01.233 10:58:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:01.233 10:58:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:01.233 ************************************ 00:13:01.233 START TEST raid_rebuild_test_io 00:13:01.233 ************************************ 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:01.233 
10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76631 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76631 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 76631 ']' 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:01.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:01.233 10:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.233 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:01.233 Zero copy mechanism will not be used. 00:13:01.233 [2024-11-15 10:58:07.997015] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:13:01.233 [2024-11-15 10:58:07.997208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76631 ] 00:13:01.492 [2024-11-15 10:58:08.179502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.492 [2024-11-15 10:58:08.297452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.752 [2024-11-15 10:58:08.499922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.752 [2024-11-15 10:58:08.500008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.011 10:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:02.011 10:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:13:02.011 10:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:02.011 10:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:02.011 10:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.011 10:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.271 BaseBdev1_malloc 00:13:02.271 10:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.271 10:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:02.271 10:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.271 10:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.271 [2024-11-15 10:58:08.969456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:02.271 [2024-11-15 10:58:08.969586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.271 [2024-11-15 10:58:08.969634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:02.271 [2024-11-15 10:58:08.969691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.271 [2024-11-15 10:58:08.971873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.271 [2024-11-15 10:58:08.971956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:02.272 BaseBdev1 00:13:02.272 10:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.272 10:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:02.272 10:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:02.272 10:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.272 10:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.272 BaseBdev2_malloc 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.272 [2024-11-15 10:58:09.024393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:02.272 [2024-11-15 10:58:09.024515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.272 [2024-11-15 10:58:09.024556] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:02.272 [2024-11-15 10:58:09.024611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.272 [2024-11-15 10:58:09.026730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.272 [2024-11-15 10:58:09.026816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:02.272 BaseBdev2 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.272 spare_malloc 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.272 spare_delay 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.272 [2024-11-15 10:58:09.104943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:13:02.272 [2024-11-15 10:58:09.105015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.272 [2024-11-15 10:58:09.105038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:02.272 [2024-11-15 10:58:09.105052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.272 [2024-11-15 10:58:09.107343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.272 [2024-11-15 10:58:09.107466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:02.272 spare 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.272 [2024-11-15 10:58:09.116992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:02.272 [2024-11-15 10:58:09.118919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.272 [2024-11-15 10:58:09.119053] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:02.272 [2024-11-15 10:58:09.119073] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:02.272 [2024-11-15 10:58:09.119416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:02.272 [2024-11-15 10:58:09.119633] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:02.272 [2024-11-15 10:58:09.119647] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:13:02.272 [2024-11-15 10:58:09.119853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.272 
"name": "raid_bdev1", 00:13:02.272 "uuid": "8c487751-65d0-469c-9230-b51ce5a55dec", 00:13:02.272 "strip_size_kb": 0, 00:13:02.272 "state": "online", 00:13:02.272 "raid_level": "raid1", 00:13:02.272 "superblock": false, 00:13:02.272 "num_base_bdevs": 2, 00:13:02.272 "num_base_bdevs_discovered": 2, 00:13:02.272 "num_base_bdevs_operational": 2, 00:13:02.272 "base_bdevs_list": [ 00:13:02.272 { 00:13:02.272 "name": "BaseBdev1", 00:13:02.272 "uuid": "0bcddfdb-7f8f-5543-8b05-398e29f8c2cc", 00:13:02.272 "is_configured": true, 00:13:02.272 "data_offset": 0, 00:13:02.272 "data_size": 65536 00:13:02.272 }, 00:13:02.272 { 00:13:02.272 "name": "BaseBdev2", 00:13:02.272 "uuid": "bb852595-6e56-55a2-a611-4f96a9fd4fcf", 00:13:02.272 "is_configured": true, 00:13:02.272 "data_offset": 0, 00:13:02.272 "data_size": 65536 00:13:02.272 } 00:13:02.272 ] 00:13:02.272 }' 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.272 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.841 [2024-11-15 10:58:09.612535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.841 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.842 [2024-11-15 10:58:09.700106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:02.842 10:58:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.842 "name": "raid_bdev1", 00:13:02.842 "uuid": "8c487751-65d0-469c-9230-b51ce5a55dec", 00:13:02.842 "strip_size_kb": 0, 00:13:02.842 "state": "online", 00:13:02.842 "raid_level": "raid1", 00:13:02.842 "superblock": false, 00:13:02.842 "num_base_bdevs": 2, 00:13:02.842 "num_base_bdevs_discovered": 1, 00:13:02.842 "num_base_bdevs_operational": 1, 00:13:02.842 "base_bdevs_list": [ 00:13:02.842 { 00:13:02.842 "name": null, 00:13:02.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.842 "is_configured": false, 00:13:02.842 "data_offset": 0, 00:13:02.842 "data_size": 65536 00:13:02.842 }, 00:13:02.842 { 00:13:02.842 "name": "BaseBdev2", 00:13:02.842 "uuid": "bb852595-6e56-55a2-a611-4f96a9fd4fcf", 00:13:02.842 "is_configured": true, 00:13:02.842 "data_offset": 0, 00:13:02.842 "data_size": 65536 00:13:02.842 } 00:13:02.842 ] 00:13:02.842 }' 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:02.842 10:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.101 [2024-11-15 10:58:09.809419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:03.101 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:03.101 Zero copy mechanism will not be used. 00:13:03.101 Running I/O for 60 seconds... 00:13:03.360 10:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:03.360 10:58:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.360 10:58:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.360 [2024-11-15 10:58:10.143494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:03.360 10:58:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.360 10:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:03.360 [2024-11-15 10:58:10.196460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:03.360 [2024-11-15 10:58:10.198454] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:03.619 [2024-11-15 10:58:10.312701] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:03.619 [2024-11-15 10:58:10.313433] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:03.619 [2024-11-15 10:58:10.434167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:03.620 [2024-11-15 10:58:10.434547] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:03.879 [2024-11-15 10:58:10.792360] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:04.424 215.00 IOPS, 645.00 MiB/s [2024-11-15T10:58:11.353Z] 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.425 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.425 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.425 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.425 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.425 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.425 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.425 10:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.425 10:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.425 10:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.425 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.425 "name": "raid_bdev1", 00:13:04.425 "uuid": "8c487751-65d0-469c-9230-b51ce5a55dec", 00:13:04.425 "strip_size_kb": 0, 00:13:04.425 "state": "online", 00:13:04.425 "raid_level": "raid1", 00:13:04.425 "superblock": false, 00:13:04.425 "num_base_bdevs": 2, 00:13:04.425 "num_base_bdevs_discovered": 2, 00:13:04.425 "num_base_bdevs_operational": 2, 00:13:04.425 "process": { 00:13:04.425 "type": "rebuild", 00:13:04.425 "target": "spare", 00:13:04.425 "progress": { 00:13:04.425 "blocks": 14336, 00:13:04.425 "percent": 21 00:13:04.425 } 00:13:04.425 }, 00:13:04.425 "base_bdevs_list": [ 00:13:04.425 { 
00:13:04.425 "name": "spare", 00:13:04.425 "uuid": "f81b672b-8bf8-5285-9e34-894d35d9f24e", 00:13:04.425 "is_configured": true, 00:13:04.425 "data_offset": 0, 00:13:04.425 "data_size": 65536 00:13:04.425 }, 00:13:04.425 { 00:13:04.425 "name": "BaseBdev2", 00:13:04.425 "uuid": "bb852595-6e56-55a2-a611-4f96a9fd4fcf", 00:13:04.425 "is_configured": true, 00:13:04.425 "data_offset": 0, 00:13:04.425 "data_size": 65536 00:13:04.425 } 00:13:04.425 ] 00:13:04.425 }' 00:13:04.425 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.425 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.425 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.707 [2024-11-15 10:58:11.346700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.707 [2024-11-15 10:58:11.473960] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:04.707 [2024-11-15 10:58:11.482372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.707 [2024-11-15 10:58:11.482435] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.707 [2024-11-15 10:58:11.482449] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:04.707 [2024-11-15 10:58:11.514532] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 
0x60d000006080 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.707 "name": "raid_bdev1", 00:13:04.707 "uuid": "8c487751-65d0-469c-9230-b51ce5a55dec", 00:13:04.707 "strip_size_kb": 0, 
00:13:04.707 "state": "online", 00:13:04.707 "raid_level": "raid1", 00:13:04.707 "superblock": false, 00:13:04.707 "num_base_bdevs": 2, 00:13:04.707 "num_base_bdevs_discovered": 1, 00:13:04.707 "num_base_bdevs_operational": 1, 00:13:04.707 "base_bdevs_list": [ 00:13:04.707 { 00:13:04.707 "name": null, 00:13:04.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.707 "is_configured": false, 00:13:04.707 "data_offset": 0, 00:13:04.707 "data_size": 65536 00:13:04.707 }, 00:13:04.707 { 00:13:04.707 "name": "BaseBdev2", 00:13:04.707 "uuid": "bb852595-6e56-55a2-a611-4f96a9fd4fcf", 00:13:04.707 "is_configured": true, 00:13:04.707 "data_offset": 0, 00:13:04.707 "data_size": 65536 00:13:04.707 } 00:13:04.707 ] 00:13:04.707 }' 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.707 10:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.225 174.00 IOPS, 522.00 MiB/s [2024-11-15T10:58:12.153Z] 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.225 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.225 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.225 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.225 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.225 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.225 10:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.225 10:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.225 10:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.225 
10:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.225 10:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.225 "name": "raid_bdev1", 00:13:05.225 "uuid": "8c487751-65d0-469c-9230-b51ce5a55dec", 00:13:05.225 "strip_size_kb": 0, 00:13:05.225 "state": "online", 00:13:05.225 "raid_level": "raid1", 00:13:05.225 "superblock": false, 00:13:05.225 "num_base_bdevs": 2, 00:13:05.225 "num_base_bdevs_discovered": 1, 00:13:05.225 "num_base_bdevs_operational": 1, 00:13:05.225 "base_bdevs_list": [ 00:13:05.225 { 00:13:05.225 "name": null, 00:13:05.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.225 "is_configured": false, 00:13:05.225 "data_offset": 0, 00:13:05.225 "data_size": 65536 00:13:05.225 }, 00:13:05.225 { 00:13:05.225 "name": "BaseBdev2", 00:13:05.225 "uuid": "bb852595-6e56-55a2-a611-4f96a9fd4fcf", 00:13:05.225 "is_configured": true, 00:13:05.225 "data_offset": 0, 00:13:05.225 "data_size": 65536 00:13:05.225 } 00:13:05.225 ] 00:13:05.225 }' 00:13:05.225 10:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.225 10:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.225 10:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.225 10:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.225 10:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:05.225 10:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.225 10:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.482 [2024-11-15 10:58:12.153802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:05.482 10:58:12 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.482 10:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:05.482 [2024-11-15 10:58:12.214068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:05.482 [2024-11-15 10:58:12.216315] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:05.482 [2024-11-15 10:58:12.337073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:05.482 [2024-11-15 10:58:12.337707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:05.741 [2024-11-15 10:58:12.552396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:05.741 [2024-11-15 10:58:12.552883] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:06.259 165.00 IOPS, 495.00 MiB/s [2024-11-15T10:58:13.187Z] [2024-11-15 10:58:13.015863] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.518 10:58:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.518 "name": "raid_bdev1", 00:13:06.518 "uuid": "8c487751-65d0-469c-9230-b51ce5a55dec", 00:13:06.518 "strip_size_kb": 0, 00:13:06.518 "state": "online", 00:13:06.518 "raid_level": "raid1", 00:13:06.518 "superblock": false, 00:13:06.518 "num_base_bdevs": 2, 00:13:06.518 "num_base_bdevs_discovered": 2, 00:13:06.518 "num_base_bdevs_operational": 2, 00:13:06.518 "process": { 00:13:06.518 "type": "rebuild", 00:13:06.518 "target": "spare", 00:13:06.518 "progress": { 00:13:06.518 "blocks": 12288, 00:13:06.518 "percent": 18 00:13:06.518 } 00:13:06.518 }, 00:13:06.518 "base_bdevs_list": [ 00:13:06.518 { 00:13:06.518 "name": "spare", 00:13:06.518 "uuid": "f81b672b-8bf8-5285-9e34-894d35d9f24e", 00:13:06.518 "is_configured": true, 00:13:06.518 "data_offset": 0, 00:13:06.518 "data_size": 65536 00:13:06.518 }, 00:13:06.518 { 00:13:06.518 "name": "BaseBdev2", 00:13:06.518 "uuid": "bb852595-6e56-55a2-a611-4f96a9fd4fcf", 00:13:06.518 "is_configured": true, 00:13:06.518 "data_offset": 0, 00:13:06.518 "data_size": 65536 00:13:06.518 } 00:13:06.518 ] 00:13:06.518 }' 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.518 [2024-11-15 10:58:13.249199] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.518 10:58:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=412 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.518 10:58:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.518 "name": "raid_bdev1", 00:13:06.518 "uuid": "8c487751-65d0-469c-9230-b51ce5a55dec", 00:13:06.518 "strip_size_kb": 0, 00:13:06.518 "state": "online", 00:13:06.518 "raid_level": "raid1", 00:13:06.518 "superblock": false, 00:13:06.518 "num_base_bdevs": 2, 00:13:06.518 "num_base_bdevs_discovered": 2, 00:13:06.518 "num_base_bdevs_operational": 2, 00:13:06.518 "process": { 00:13:06.518 "type": "rebuild", 00:13:06.518 "target": "spare", 00:13:06.518 "progress": { 00:13:06.518 "blocks": 14336, 00:13:06.518 "percent": 21 00:13:06.518 } 00:13:06.518 }, 00:13:06.518 "base_bdevs_list": [ 00:13:06.518 { 00:13:06.518 "name": "spare", 00:13:06.518 "uuid": "f81b672b-8bf8-5285-9e34-894d35d9f24e", 00:13:06.518 "is_configured": true, 00:13:06.518 "data_offset": 0, 00:13:06.518 "data_size": 65536 00:13:06.518 }, 00:13:06.518 { 00:13:06.518 "name": "BaseBdev2", 00:13:06.518 "uuid": "bb852595-6e56-55a2-a611-4f96a9fd4fcf", 00:13:06.518 "is_configured": true, 00:13:06.518 "data_offset": 0, 00:13:06.518 "data_size": 65536 00:13:06.518 } 00:13:06.518 ] 00:13:06.518 }' 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.518 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.778 [2024-11-15 10:58:13.466443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:06.778 [2024-11-15 10:58:13.466878] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:06.778 10:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.778 10:58:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:13:07.037 [2024-11-15 10:58:13.715806] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:07.038 147.00 IOPS, 441.00 MiB/s [2024-11-15T10:58:13.966Z] [2024-11-15 10:58:13.943031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:07.606 [2024-11-15 10:58:14.412308] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:07.606 10:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:07.606 10:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.606 10:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.606 10:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.606 10:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.606 10:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.606 10:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.606 10:58:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.606 10:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.606 10:58:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.606 10:58:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.606 10:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.606 "name": "raid_bdev1", 00:13:07.606 "uuid": "8c487751-65d0-469c-9230-b51ce5a55dec", 
00:13:07.606 "strip_size_kb": 0, 00:13:07.606 "state": "online", 00:13:07.606 "raid_level": "raid1", 00:13:07.606 "superblock": false, 00:13:07.606 "num_base_bdevs": 2, 00:13:07.606 "num_base_bdevs_discovered": 2, 00:13:07.606 "num_base_bdevs_operational": 2, 00:13:07.606 "process": { 00:13:07.606 "type": "rebuild", 00:13:07.606 "target": "spare", 00:13:07.606 "progress": { 00:13:07.606 "blocks": 28672, 00:13:07.606 "percent": 43 00:13:07.606 } 00:13:07.606 }, 00:13:07.606 "base_bdevs_list": [ 00:13:07.606 { 00:13:07.606 "name": "spare", 00:13:07.606 "uuid": "f81b672b-8bf8-5285-9e34-894d35d9f24e", 00:13:07.606 "is_configured": true, 00:13:07.606 "data_offset": 0, 00:13:07.606 "data_size": 65536 00:13:07.606 }, 00:13:07.606 { 00:13:07.606 "name": "BaseBdev2", 00:13:07.606 "uuid": "bb852595-6e56-55a2-a611-4f96a9fd4fcf", 00:13:07.606 "is_configured": true, 00:13:07.606 "data_offset": 0, 00:13:07.606 "data_size": 65536 00:13:07.606 } 00:13:07.606 ] 00:13:07.606 }' 00:13:07.866 10:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.866 10:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.866 10:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.866 10:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.866 10:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:07.866 [2024-11-15 10:58:14.736299] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:08.125 130.00 IOPS, 390.00 MiB/s [2024-11-15T10:58:15.053Z] [2024-11-15 10:58:14.944188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:08.384 [2024-11-15 10:58:15.190751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.954 "name": "raid_bdev1", 00:13:08.954 "uuid": "8c487751-65d0-469c-9230-b51ce5a55dec", 00:13:08.954 "strip_size_kb": 0, 00:13:08.954 "state": "online", 00:13:08.954 "raid_level": "raid1", 00:13:08.954 "superblock": false, 00:13:08.954 "num_base_bdevs": 2, 00:13:08.954 "num_base_bdevs_discovered": 2, 00:13:08.954 "num_base_bdevs_operational": 2, 00:13:08.954 "process": { 00:13:08.954 "type": "rebuild", 00:13:08.954 "target": "spare", 00:13:08.954 "progress": { 00:13:08.954 "blocks": 43008, 00:13:08.954 "percent": 65 00:13:08.954 } 00:13:08.954 }, 00:13:08.954 "base_bdevs_list": [ 00:13:08.954 { 
00:13:08.954 "name": "spare", 00:13:08.954 "uuid": "f81b672b-8bf8-5285-9e34-894d35d9f24e", 00:13:08.954 "is_configured": true, 00:13:08.954 "data_offset": 0, 00:13:08.954 "data_size": 65536 00:13:08.954 }, 00:13:08.954 { 00:13:08.954 "name": "BaseBdev2", 00:13:08.954 "uuid": "bb852595-6e56-55a2-a611-4f96a9fd4fcf", 00:13:08.954 "is_configured": true, 00:13:08.954 "data_offset": 0, 00:13:08.954 "data_size": 65536 00:13:08.954 } 00:13:08.954 ] 00:13:08.954 }' 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.954 10:58:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:09.214 115.83 IOPS, 347.50 MiB/s [2024-11-15T10:58:16.142Z] [2024-11-15 10:58:15.947213] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:09.214 [2024-11-15 10:58:16.061800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:09.782 [2024-11-15 10:58:16.502846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:10.041 10:58:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:10.041 10:58:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.041 10:58:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.041 10:58:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:13:10.041 10:58:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.041 10:58:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.041 10:58:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.041 10:58:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.041 10:58:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.041 10:58:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.041 10:58:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.041 10:58:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.041 "name": "raid_bdev1", 00:13:10.041 "uuid": "8c487751-65d0-469c-9230-b51ce5a55dec", 00:13:10.041 "strip_size_kb": 0, 00:13:10.041 "state": "online", 00:13:10.041 "raid_level": "raid1", 00:13:10.041 "superblock": false, 00:13:10.041 "num_base_bdevs": 2, 00:13:10.041 "num_base_bdevs_discovered": 2, 00:13:10.041 "num_base_bdevs_operational": 2, 00:13:10.041 "process": { 00:13:10.041 "type": "rebuild", 00:13:10.041 "target": "spare", 00:13:10.041 "progress": { 00:13:10.041 "blocks": 61440, 00:13:10.041 "percent": 93 00:13:10.041 } 00:13:10.041 }, 00:13:10.041 "base_bdevs_list": [ 00:13:10.041 { 00:13:10.041 "name": "spare", 00:13:10.041 "uuid": "f81b672b-8bf8-5285-9e34-894d35d9f24e", 00:13:10.041 "is_configured": true, 00:13:10.041 "data_offset": 0, 00:13:10.041 "data_size": 65536 00:13:10.041 }, 00:13:10.041 { 00:13:10.041 "name": "BaseBdev2", 00:13:10.041 "uuid": "bb852595-6e56-55a2-a611-4f96a9fd4fcf", 00:13:10.041 "is_configured": true, 00:13:10.041 "data_offset": 0, 00:13:10.041 "data_size": 65536 00:13:10.041 } 00:13:10.041 ] 00:13:10.041 }' 00:13:10.041 10:58:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.041 104.43 IOPS, 313.29 MiB/s [2024-11-15T10:58:16.969Z] 10:58:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.041 10:58:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.041 10:58:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.041 10:58:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:10.041 [2024-11-15 10:58:16.941122] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:10.301 [2024-11-15 10:58:17.041019] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:10.301 [2024-11-15 10:58:17.043349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.240 95.62 IOPS, 286.88 MiB/s [2024-11-15T10:58:18.168Z] 10:58:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:11.240 10:58:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.240 10:58:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.240 10:58:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.240 10:58:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.240 10:58:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.240 10:58:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.240 10:58:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.240 10:58:17 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.240 10:58:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.240 10:58:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.240 10:58:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.240 "name": "raid_bdev1", 00:13:11.240 "uuid": "8c487751-65d0-469c-9230-b51ce5a55dec", 00:13:11.240 "strip_size_kb": 0, 00:13:11.240 "state": "online", 00:13:11.240 "raid_level": "raid1", 00:13:11.240 "superblock": false, 00:13:11.240 "num_base_bdevs": 2, 00:13:11.240 "num_base_bdevs_discovered": 2, 00:13:11.240 "num_base_bdevs_operational": 2, 00:13:11.240 "base_bdevs_list": [ 00:13:11.240 { 00:13:11.240 "name": "spare", 00:13:11.240 "uuid": "f81b672b-8bf8-5285-9e34-894d35d9f24e", 00:13:11.240 "is_configured": true, 00:13:11.240 "data_offset": 0, 00:13:11.240 "data_size": 65536 00:13:11.240 }, 00:13:11.240 { 00:13:11.240 "name": "BaseBdev2", 00:13:11.240 "uuid": "bb852595-6e56-55a2-a611-4f96a9fd4fcf", 00:13:11.240 "is_configured": true, 00:13:11.240 "data_offset": 0, 00:13:11.240 "data_size": 65536 00:13:11.240 } 00:13:11.240 ] 00:13:11.240 }' 00:13:11.240 10:58:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.240 10:58:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:11.240 10:58:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.240 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:11.240 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:11.240 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.240 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:11.240 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.240 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.240 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.240 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.240 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.240 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.240 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.240 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.240 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.240 "name": "raid_bdev1", 00:13:11.240 "uuid": "8c487751-65d0-469c-9230-b51ce5a55dec", 00:13:11.240 "strip_size_kb": 0, 00:13:11.240 "state": "online", 00:13:11.240 "raid_level": "raid1", 00:13:11.240 "superblock": false, 00:13:11.240 "num_base_bdevs": 2, 00:13:11.240 "num_base_bdevs_discovered": 2, 00:13:11.240 "num_base_bdevs_operational": 2, 00:13:11.240 "base_bdevs_list": [ 00:13:11.240 { 00:13:11.240 "name": "spare", 00:13:11.240 "uuid": "f81b672b-8bf8-5285-9e34-894d35d9f24e", 00:13:11.240 "is_configured": true, 00:13:11.241 "data_offset": 0, 00:13:11.241 "data_size": 65536 00:13:11.241 }, 00:13:11.241 { 00:13:11.241 "name": "BaseBdev2", 00:13:11.241 "uuid": "bb852595-6e56-55a2-a611-4f96a9fd4fcf", 00:13:11.241 "is_configured": true, 00:13:11.241 "data_offset": 0, 00:13:11.241 "data_size": 65536 00:13:11.241 } 00:13:11.241 ] 00:13:11.241 }' 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.241 10:58:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.241 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.500 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.500 10:58:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.500 "name": "raid_bdev1", 00:13:11.500 "uuid": "8c487751-65d0-469c-9230-b51ce5a55dec", 00:13:11.500 "strip_size_kb": 0, 00:13:11.500 "state": "online", 00:13:11.500 "raid_level": "raid1", 00:13:11.500 "superblock": false, 00:13:11.500 "num_base_bdevs": 2, 00:13:11.500 "num_base_bdevs_discovered": 2, 00:13:11.500 "num_base_bdevs_operational": 2, 00:13:11.500 "base_bdevs_list": [ 00:13:11.500 { 00:13:11.500 "name": "spare", 00:13:11.500 "uuid": "f81b672b-8bf8-5285-9e34-894d35d9f24e", 00:13:11.500 "is_configured": true, 00:13:11.500 "data_offset": 0, 00:13:11.500 "data_size": 65536 00:13:11.500 }, 00:13:11.500 { 00:13:11.500 "name": "BaseBdev2", 00:13:11.500 "uuid": "bb852595-6e56-55a2-a611-4f96a9fd4fcf", 00:13:11.500 "is_configured": true, 00:13:11.500 "data_offset": 0, 00:13:11.500 "data_size": 65536 00:13:11.500 } 00:13:11.500 ] 00:13:11.500 }' 00:13:11.500 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.500 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.760 [2024-11-15 10:58:18.583872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.760 [2024-11-15 10:58:18.583910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.760 00:13:11.760 Latency(us) 00:13:11.760 [2024-11-15T10:58:18.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.760 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:11.760 raid_bdev1 : 
8.80 89.70 269.10 0.00 0.00 16167.17 314.80 135536.46 00:13:11.760 [2024-11-15T10:58:18.688Z] =================================================================================================================== 00:13:11.760 [2024-11-15T10:58:18.688Z] Total : 89.70 269.10 0.00 0.00 16167.17 314.80 135536.46 00:13:11.760 [2024-11-15 10:58:18.614927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.760 { 00:13:11.760 "results": [ 00:13:11.760 { 00:13:11.760 "job": "raid_bdev1", 00:13:11.760 "core_mask": "0x1", 00:13:11.760 "workload": "randrw", 00:13:11.760 "percentage": 50, 00:13:11.760 "status": "finished", 00:13:11.760 "queue_depth": 2, 00:13:11.760 "io_size": 3145728, 00:13:11.760 "runtime": 8.796144, 00:13:11.760 "iops": 89.69839511495037, 00:13:11.760 "mibps": 269.09518534485113, 00:13:11.760 "io_failed": 0, 00:13:11.760 "io_timeout": 0, 00:13:11.760 "avg_latency_us": 16167.170276896852, 00:13:11.760 "min_latency_us": 314.80174672489085, 00:13:11.760 "max_latency_us": 135536.46113537118 00:13:11.760 } 00:13:11.760 ], 00:13:11.760 "core_count": 1 00:13:11.760 } 00:13:11.760 [2024-11-15 10:58:18.615042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.760 [2024-11-15 10:58:18.615143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.760 [2024-11-15 10:58:18.615173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.760 10:58:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.760 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:12.020 /dev/nbd0 00:13:12.020 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:12.020 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:12.020 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:12.020 10:58:18 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@871 -- # local i 00:13:12.020 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:12.020 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:12.020 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:12.020 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:13:12.020 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:12.020 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.021 1+0 records in 00:13:12.021 1+0 records out 00:13:12.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552902 s, 7.4 MB/s 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z 
BaseBdev2 ']' 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:12.021 10:58:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:12.280 /dev/nbd1 00:13:12.280 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:12.280 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- 
# break 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.281 1+0 records in 00:13:12.281 1+0 records out 00:13:12.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377443 s, 10.9 MB/s 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:12.281 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:12.540 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:12.540 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.540 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:12.540 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.540 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:12.540 
10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.540 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:12.799 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:12.799 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:12.799 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:12.799 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.799 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.799 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:12.799 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:12.799 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.799 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:12.799 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.799 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:12.799 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.799 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:12.799 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.799 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76631 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 76631 ']' 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 76631 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76631 00:13:13.058 killing process with pid 76631 00:13:13.058 Received shutdown signal, test time was about 10.060571 seconds 00:13:13.058 00:13:13.058 Latency(us) 00:13:13.058 [2024-11-15T10:58:19.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.058 [2024-11-15T10:58:19.986Z] =================================================================================================================== 00:13:13.058 [2024-11-15T10:58:19.986Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:13.058 10:58:19 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76631' 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76631 00:13:13.058 [2024-11-15 10:58:19.853083] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:13.058 10:58:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76631 00:13:13.353 [2024-11-15 10:58:20.085389] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:14.733 00:13:14.733 real 0m13.499s 00:13:14.733 user 0m16.789s 00:13:14.733 sys 0m1.581s 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.733 ************************************ 00:13:14.733 END TEST raid_rebuild_test_io 00:13:14.733 ************************************ 00:13:14.733 10:58:21 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:14.733 10:58:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:14.733 10:58:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:14.733 10:58:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:14.733 ************************************ 00:13:14.733 START TEST raid_rebuild_test_sb_io 00:13:14.733 ************************************ 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:13:14.733 10:58:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local 
raid_bdev_size 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:14.733 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:14.734 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77028 00:13:14.734 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77028 00:13:14.734 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:14.734 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 77028 ']' 00:13:14.734 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.734 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:14.734 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.734 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:14.734 10:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.734 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:14.734 Zero copy mechanism will not be used. 
00:13:14.734 [2024-11-15 10:58:21.547680] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:13:14.734 [2024-11-15 10:58:21.547822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77028 ] 00:13:14.993 [2024-11-15 10:58:21.729575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.993 [2024-11-15 10:58:21.861752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.252 [2024-11-15 10:58:22.100637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.252 [2024-11-15 10:58:22.100687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.819 BaseBdev1_malloc 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.819 [2024-11-15 10:58:22.529904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:15.819 [2024-11-15 10:58:22.529971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.819 [2024-11-15 10:58:22.529990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:15.819 [2024-11-15 10:58:22.530001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.819 [2024-11-15 10:58:22.532095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.819 [2024-11-15 10:58:22.532139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:15.819 BaseBdev1 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.819 BaseBdev2_malloc 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.819 [2024-11-15 10:58:22.588582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:13:15.819 [2024-11-15 10:58:22.588653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.819 [2024-11-15 10:58:22.588676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:15.819 [2024-11-15 10:58:22.588691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.819 [2024-11-15 10:58:22.591107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.819 [2024-11-15 10:58:22.591154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:15.819 BaseBdev2 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.819 spare_malloc 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.819 spare_delay 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.819 
10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.819 [2024-11-15 10:58:22.673791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:15.819 [2024-11-15 10:58:22.673862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.819 [2024-11-15 10:58:22.673888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:15.819 [2024-11-15 10:58:22.673901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.819 [2024-11-15 10:58:22.676391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.819 [2024-11-15 10:58:22.676441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:15.819 spare 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.819 [2024-11-15 10:58:22.685851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.819 [2024-11-15 10:58:22.688009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.819 [2024-11-15 10:58:22.688222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:15.819 [2024-11-15 10:58:22.688243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:15.819 [2024-11-15 10:58:22.688552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:15.819 [2024-11-15 10:58:22.688747] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:15.819 [2024-11-15 10:58:22.688758] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:15.819 [2024-11-15 10:58:22.688937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.819 10:58:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.819 "name": "raid_bdev1", 00:13:15.819 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc", 00:13:15.819 "strip_size_kb": 0, 00:13:15.819 "state": "online", 00:13:15.819 "raid_level": "raid1", 00:13:15.819 "superblock": true, 00:13:15.819 "num_base_bdevs": 2, 00:13:15.819 "num_base_bdevs_discovered": 2, 00:13:15.819 "num_base_bdevs_operational": 2, 00:13:15.819 "base_bdevs_list": [ 00:13:15.819 { 00:13:15.819 "name": "BaseBdev1", 00:13:15.819 "uuid": "6c27b949-ccd6-51a6-b946-ac05e52601fe", 00:13:15.819 "is_configured": true, 00:13:15.819 "data_offset": 2048, 00:13:15.819 "data_size": 63488 00:13:15.819 }, 00:13:15.819 { 00:13:15.819 "name": "BaseBdev2", 00:13:15.819 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6", 00:13:15.819 "is_configured": true, 00:13:15.819 "data_offset": 2048, 00:13:15.819 "data_size": 63488 00:13:15.819 } 00:13:15.819 ] 00:13:15.819 }' 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.819 10:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.387 [2024-11-15 10:58:23.137466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.387 [2024-11-15 10:58:23.216964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.387 "name": "raid_bdev1", 00:13:16.387 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc", 00:13:16.387 "strip_size_kb": 0, 00:13:16.387 "state": "online", 00:13:16.387 "raid_level": "raid1", 00:13:16.387 "superblock": true, 00:13:16.387 "num_base_bdevs": 2, 00:13:16.387 "num_base_bdevs_discovered": 1, 00:13:16.387 "num_base_bdevs_operational": 1, 00:13:16.387 "base_bdevs_list": [ 00:13:16.387 { 00:13:16.387 "name": null, 00:13:16.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.387 "is_configured": false, 00:13:16.387 
"data_offset": 0, 00:13:16.387 "data_size": 63488 00:13:16.387 }, 00:13:16.387 { 00:13:16.387 "name": "BaseBdev2", 00:13:16.387 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6", 00:13:16.387 "is_configured": true, 00:13:16.387 "data_offset": 2048, 00:13:16.387 "data_size": 63488 00:13:16.387 } 00:13:16.387 ] 00:13:16.387 }' 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.387 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.387 [2024-11-15 10:58:23.310064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:16.647 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:16.647 Zero copy mechanism will not be used. 00:13:16.647 Running I/O for 60 seconds... 00:13:16.906 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:16.906 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.906 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.906 [2024-11-15 10:58:23.739412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:16.906 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.906 10:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:16.906 [2024-11-15 10:58:23.801483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:16.906 [2024-11-15 10:58:23.803800] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:17.165 [2024-11-15 10:58:23.912951] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:17.165 [2024-11-15 10:58:23.913662] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:17.425 [2024-11-15 10:58:24.135262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:17.425 [2024-11-15 10:58:24.135708] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:17.682 171.00 IOPS, 513.00 MiB/s [2024-11-15T10:58:24.610Z] [2024-11-15 10:58:24.472249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:17.942 [2024-11-15 10:58:24.698281] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:17.942 [2024-11-15 10:58:24.698799] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:17.942 10:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.942 10:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.942 10:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.942 10:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.942 10:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.942 10:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.942 10:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.942 10:58:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.942 10:58:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.942 10:58:24 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.942 10:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.942 "name": "raid_bdev1", 00:13:17.942 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc", 00:13:17.942 "strip_size_kb": 0, 00:13:17.942 "state": "online", 00:13:17.942 "raid_level": "raid1", 00:13:17.942 "superblock": true, 00:13:17.942 "num_base_bdevs": 2, 00:13:17.942 "num_base_bdevs_discovered": 2, 00:13:17.942 "num_base_bdevs_operational": 2, 00:13:17.942 "process": { 00:13:17.942 "type": "rebuild", 00:13:17.942 "target": "spare", 00:13:17.942 "progress": { 00:13:17.942 "blocks": 10240, 00:13:17.942 "percent": 16 00:13:17.942 } 00:13:17.942 }, 00:13:17.942 "base_bdevs_list": [ 00:13:17.942 { 00:13:17.942 "name": "spare", 00:13:17.942 "uuid": "04cf5566-8b25-598f-a748-ed80be52b7a1", 00:13:17.942 "is_configured": true, 00:13:17.942 "data_offset": 2048, 00:13:17.943 "data_size": 63488 00:13:17.943 }, 00:13:17.943 { 00:13:17.943 "name": "BaseBdev2", 00:13:17.943 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6", 00:13:17.943 "is_configured": true, 00:13:17.943 "data_offset": 2048, 00:13:17.943 "data_size": 63488 00:13:17.943 } 00:13:17.943 ] 00:13:17.943 }' 00:13:17.943 10:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.202 10:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.202 10:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.202 10:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.202 10:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:18.202 10:58:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.202 10:58:24 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.202 [2024-11-15 10:58:24.947705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.202 [2024-11-15 10:58:25.056953] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:18.202 [2024-11-15 10:58:25.066840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.202 [2024-11-15 10:58:25.066963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.202 [2024-11-15 10:58:25.066997] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:18.202 [2024-11-15 10:58:25.126757] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.461 
10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.461 "name": "raid_bdev1", 00:13:18.461 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc", 00:13:18.461 "strip_size_kb": 0, 00:13:18.461 "state": "online", 00:13:18.461 "raid_level": "raid1", 00:13:18.461 "superblock": true, 00:13:18.461 "num_base_bdevs": 2, 00:13:18.461 "num_base_bdevs_discovered": 1, 00:13:18.461 "num_base_bdevs_operational": 1, 00:13:18.461 "base_bdevs_list": [ 00:13:18.461 { 00:13:18.461 "name": null, 00:13:18.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.461 "is_configured": false, 00:13:18.461 "data_offset": 0, 00:13:18.461 "data_size": 63488 00:13:18.461 }, 00:13:18.461 { 00:13:18.461 "name": "BaseBdev2", 00:13:18.461 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6", 00:13:18.461 "is_configured": true, 00:13:18.461 "data_offset": 2048, 00:13:18.461 "data_size": 63488 00:13:18.461 } 00:13:18.461 ] 00:13:18.461 }' 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.461 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.721 122.00 IOPS, 366.00 MiB/s [2024-11-15T10:58:25.649Z] 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none 
none 00:13:18.721 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.721 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.721 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.721 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.721 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.721 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.721 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.721 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.721 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.721 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.721 "name": "raid_bdev1", 00:13:18.721 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc", 00:13:18.721 "strip_size_kb": 0, 00:13:18.721 "state": "online", 00:13:18.721 "raid_level": "raid1", 00:13:18.721 "superblock": true, 00:13:18.721 "num_base_bdevs": 2, 00:13:18.721 "num_base_bdevs_discovered": 1, 00:13:18.721 "num_base_bdevs_operational": 1, 00:13:18.721 "base_bdevs_list": [ 00:13:18.721 { 00:13:18.721 "name": null, 00:13:18.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.721 "is_configured": false, 00:13:18.721 "data_offset": 0, 00:13:18.721 "data_size": 63488 00:13:18.721 }, 00:13:18.721 { 00:13:18.721 "name": "BaseBdev2", 00:13:18.721 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6", 00:13:18.721 "is_configured": true, 00:13:18.721 "data_offset": 2048, 00:13:18.721 "data_size": 63488 00:13:18.721 } 00:13:18.721 ] 00:13:18.721 }' 00:13:18.980 10:58:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.980 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.980 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.980 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.980 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:18.980 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.980 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.980 [2024-11-15 10:58:25.744521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.980 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.980 10:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:18.980 [2024-11-15 10:58:25.782846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:18.980 [2024-11-15 10:58:25.784742] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:18.980 [2024-11-15 10:58:25.897282] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:18.980 [2024-11-15 10:58:25.897858] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:19.239 [2024-11-15 10:58:26.011604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:19.239 [2024-11-15 10:58:26.012039] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:13:19.498 141.67 IOPS, 425.00 MiB/s [2024-11-15T10:58:26.426Z] [2024-11-15 10:58:26.349203] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:19.498 [2024-11-15 10:58:26.349665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:19.757 [2024-11-15 10:58:26.565698] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:20.016 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.016 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.016 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.016 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.016 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.016 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.016 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.016 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.016 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.016 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.016 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.016 "name": "raid_bdev1", 00:13:20.016 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc", 00:13:20.016 "strip_size_kb": 0, 00:13:20.016 "state": "online", 00:13:20.016 "raid_level": "raid1", 00:13:20.016 
"superblock": true, 00:13:20.016 "num_base_bdevs": 2, 00:13:20.016 "num_base_bdevs_discovered": 2, 00:13:20.016 "num_base_bdevs_operational": 2, 00:13:20.016 "process": { 00:13:20.016 "type": "rebuild", 00:13:20.016 "target": "spare", 00:13:20.016 "progress": { 00:13:20.016 "blocks": 12288, 00:13:20.016 "percent": 19 00:13:20.016 } 00:13:20.016 }, 00:13:20.016 "base_bdevs_list": [ 00:13:20.016 { 00:13:20.016 "name": "spare", 00:13:20.016 "uuid": "04cf5566-8b25-598f-a748-ed80be52b7a1", 00:13:20.016 "is_configured": true, 00:13:20.016 "data_offset": 2048, 00:13:20.016 "data_size": 63488 00:13:20.016 }, 00:13:20.016 { 00:13:20.016 "name": "BaseBdev2", 00:13:20.016 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6", 00:13:20.016 "is_configured": true, 00:13:20.016 "data_offset": 2048, 00:13:20.016 "data_size": 63488 00:13:20.016 } 00:13:20.017 ] 00:13:20.017 }' 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:20.017 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:20.017 
10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=425 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.017 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.276 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.276 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.276 "name": "raid_bdev1", 00:13:20.276 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc", 00:13:20.276 "strip_size_kb": 0, 00:13:20.276 "state": "online", 00:13:20.276 "raid_level": "raid1", 00:13:20.276 "superblock": true, 00:13:20.276 "num_base_bdevs": 2, 00:13:20.276 "num_base_bdevs_discovered": 2, 00:13:20.276 "num_base_bdevs_operational": 2, 00:13:20.276 "process": { 00:13:20.276 "type": "rebuild", 00:13:20.276 "target": "spare", 00:13:20.276 "progress": { 00:13:20.276 "blocks": 14336, 00:13:20.276 "percent": 22 00:13:20.276 } 
00:13:20.276 }, 00:13:20.276 "base_bdevs_list": [ 00:13:20.276 { 00:13:20.276 "name": "spare", 00:13:20.276 "uuid": "04cf5566-8b25-598f-a748-ed80be52b7a1", 00:13:20.276 "is_configured": true, 00:13:20.276 "data_offset": 2048, 00:13:20.276 "data_size": 63488 00:13:20.276 }, 00:13:20.276 { 00:13:20.276 "name": "BaseBdev2", 00:13:20.276 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6", 00:13:20.276 "is_configured": true, 00:13:20.276 "data_offset": 2048, 00:13:20.276 "data_size": 63488 00:13:20.276 } 00:13:20.276 ] 00:13:20.276 }' 00:13:20.276 10:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.276 10:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.276 10:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.276 [2024-11-15 10:58:27.040748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:20.276 10:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.276 10:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.100 130.25 IOPS, 390.75 MiB/s [2024-11-15T10:58:28.028Z] [2024-11-15 10:58:27.859608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:21.359 10:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.359 10:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.359 10:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.359 10:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.359 10:58:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.359 10:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.359 10:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.359 10:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.359 10:58:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.359 10:58:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.359 10:58:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.359 10:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.359 "name": "raid_bdev1", 00:13:21.359 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc", 00:13:21.359 "strip_size_kb": 0, 00:13:21.359 "state": "online", 00:13:21.359 "raid_level": "raid1", 00:13:21.359 "superblock": true, 00:13:21.359 "num_base_bdevs": 2, 00:13:21.359 "num_base_bdevs_discovered": 2, 00:13:21.359 "num_base_bdevs_operational": 2, 00:13:21.359 "process": { 00:13:21.359 "type": "rebuild", 00:13:21.359 "target": "spare", 00:13:21.359 "progress": { 00:13:21.359 "blocks": 30720, 00:13:21.359 "percent": 48 00:13:21.359 } 00:13:21.359 }, 00:13:21.359 "base_bdevs_list": [ 00:13:21.359 { 00:13:21.359 "name": "spare", 00:13:21.359 "uuid": "04cf5566-8b25-598f-a748-ed80be52b7a1", 00:13:21.359 "is_configured": true, 00:13:21.359 "data_offset": 2048, 00:13:21.359 "data_size": 63488 00:13:21.359 }, 00:13:21.359 { 00:13:21.359 "name": "BaseBdev2", 00:13:21.359 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6", 00:13:21.359 "is_configured": true, 00:13:21.359 "data_offset": 2048, 00:13:21.359 "data_size": 63488 00:13:21.360 } 00:13:21.360 ] 00:13:21.360 }' 00:13:21.360 10:58:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.360 10:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.360 10:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.360 10:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.360 10:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.625 117.40 IOPS, 352.20 MiB/s [2024-11-15T10:58:28.553Z] [2024-11-15 10:58:28.418093] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:22.188 [2024-11-15 10:58:29.103346] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:22.448 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.448 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.448 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.449 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.449 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.449 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.449 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.449 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.449 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.449 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.449 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.449 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.449 "name": "raid_bdev1", 00:13:22.449 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc", 00:13:22.449 "strip_size_kb": 0, 00:13:22.449 "state": "online", 00:13:22.449 "raid_level": "raid1", 00:13:22.449 "superblock": true, 00:13:22.449 "num_base_bdevs": 2, 00:13:22.449 "num_base_bdevs_discovered": 2, 00:13:22.449 "num_base_bdevs_operational": 2, 00:13:22.449 "process": { 00:13:22.449 "type": "rebuild", 00:13:22.449 "target": "spare", 00:13:22.449 "progress": { 00:13:22.449 "blocks": 51200, 00:13:22.449 "percent": 80 00:13:22.449 } 00:13:22.449 }, 00:13:22.449 "base_bdevs_list": [ 00:13:22.449 { 00:13:22.449 "name": "spare", 00:13:22.449 "uuid": "04cf5566-8b25-598f-a748-ed80be52b7a1", 00:13:22.449 "is_configured": true, 00:13:22.449 "data_offset": 2048, 00:13:22.449 "data_size": 63488 00:13:22.449 }, 00:13:22.449 { 00:13:22.449 "name": "BaseBdev2", 00:13:22.449 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6", 00:13:22.449 "is_configured": true, 00:13:22.449 "data_offset": 2048, 00:13:22.449 "data_size": 63488 00:13:22.449 } 00:13:22.449 ] 00:13:22.449 }' 00:13:22.449 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.449 103.17 IOPS, 309.50 MiB/s [2024-11-15T10:58:29.377Z] 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.449 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.706 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.707 10:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:22.707 [2024-11-15 10:58:29.552298] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:23.271 [2024-11-15 10:58:29.997574] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:23.271 [2024-11-15 10:58:30.104211] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:23.271 [2024-11-15 10:58:30.106600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.529 93.57 IOPS, 280.71 MiB/s [2024-11-15T10:58:30.457Z] 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:23.529 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.529 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.529 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.529 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.529 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.529 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.529 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.529 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.529 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.529 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.529 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.529 "name": "raid_bdev1", 00:13:23.529 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc", 
00:13:23.529 "strip_size_kb": 0, 00:13:23.529 "state": "online", 00:13:23.529 "raid_level": "raid1", 00:13:23.529 "superblock": true, 00:13:23.529 "num_base_bdevs": 2, 00:13:23.529 "num_base_bdevs_discovered": 2, 00:13:23.529 "num_base_bdevs_operational": 2, 00:13:23.529 "base_bdevs_list": [ 00:13:23.529 { 00:13:23.529 "name": "spare", 00:13:23.529 "uuid": "04cf5566-8b25-598f-a748-ed80be52b7a1", 00:13:23.529 "is_configured": true, 00:13:23.529 "data_offset": 2048, 00:13:23.529 "data_size": 63488 00:13:23.529 }, 00:13:23.529 { 00:13:23.529 "name": "BaseBdev2", 00:13:23.529 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6", 00:13:23.529 "is_configured": true, 00:13:23.529 "data_offset": 2048, 00:13:23.529 "data_size": 63488 00:13:23.529 } 00:13:23.529 ] 00:13:23.529 }' 00:13:23.529 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.788 "name": "raid_bdev1", 00:13:23.788 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc", 00:13:23.788 "strip_size_kb": 0, 00:13:23.788 "state": "online", 00:13:23.788 "raid_level": "raid1", 00:13:23.788 "superblock": true, 00:13:23.788 "num_base_bdevs": 2, 00:13:23.788 "num_base_bdevs_discovered": 2, 00:13:23.788 "num_base_bdevs_operational": 2, 00:13:23.788 "base_bdevs_list": [ 00:13:23.788 { 00:13:23.788 "name": "spare", 00:13:23.788 "uuid": "04cf5566-8b25-598f-a748-ed80be52b7a1", 00:13:23.788 "is_configured": true, 00:13:23.788 "data_offset": 2048, 00:13:23.788 "data_size": 63488 00:13:23.788 }, 00:13:23.788 { 00:13:23.788 "name": "BaseBdev2", 00:13:23.788 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6", 00:13:23.788 "is_configured": true, 00:13:23.788 "data_offset": 2048, 00:13:23.788 "data_size": 63488 00:13:23.788 } 00:13:23.788 ] 00:13:23.788 }' 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.788 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.047 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.047 "name": "raid_bdev1", 00:13:24.047 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc", 00:13:24.047 "strip_size_kb": 0, 00:13:24.048 "state": "online", 00:13:24.048 "raid_level": "raid1", 00:13:24.048 "superblock": true, 00:13:24.048 "num_base_bdevs": 2, 00:13:24.048 
"num_base_bdevs_discovered": 2, 00:13:24.048 "num_base_bdevs_operational": 2, 00:13:24.048 "base_bdevs_list": [ 00:13:24.048 { 00:13:24.048 "name": "spare", 00:13:24.048 "uuid": "04cf5566-8b25-598f-a748-ed80be52b7a1", 00:13:24.048 "is_configured": true, 00:13:24.048 "data_offset": 2048, 00:13:24.048 "data_size": 63488 00:13:24.048 }, 00:13:24.048 { 00:13:24.048 "name": "BaseBdev2", 00:13:24.048 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6", 00:13:24.048 "is_configured": true, 00:13:24.048 "data_offset": 2048, 00:13:24.048 "data_size": 63488 00:13:24.048 } 00:13:24.048 ] 00:13:24.048 }' 00:13:24.048 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.048 10:58:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.308 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:24.308 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.308 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.308 [2024-11-15 10:58:31.091765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.308 [2024-11-15 10:58:31.091800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.308 00:13:24.308 Latency(us) 00:13:24.308 [2024-11-15T10:58:31.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.308 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:24.308 raid_bdev1 : 7.89 86.73 260.20 0.00 0.00 15654.87 296.92 114473.36 00:13:24.308 [2024-11-15T10:58:31.236Z] =================================================================================================================== 00:13:24.308 [2024-11-15T10:58:31.236Z] Total : 86.73 260.20 0.00 0.00 15654.87 296.92 114473.36 00:13:24.308 [2024-11-15 
10:58:31.209959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.308 { 00:13:24.308 "results": [ 00:13:24.308 { 00:13:24.308 "job": "raid_bdev1", 00:13:24.308 "core_mask": "0x1", 00:13:24.308 "workload": "randrw", 00:13:24.308 "percentage": 50, 00:13:24.308 "status": "finished", 00:13:24.308 "queue_depth": 2, 00:13:24.308 "io_size": 3145728, 00:13:24.308 "runtime": 7.886159, 00:13:24.308 "iops": 86.7342390636557, 00:13:24.308 "mibps": 260.2027171909671, 00:13:24.308 "io_failed": 0, 00:13:24.308 "io_timeout": 0, 00:13:24.308 "avg_latency_us": 15654.866447049208, 00:13:24.308 "min_latency_us": 296.91528384279474, 00:13:24.308 "max_latency_us": 114473.36244541485 00:13:24.308 } 00:13:24.308 ], 00:13:24.308 "core_count": 1 00:13:24.308 } 00:13:24.308 [2024-11-15 10:58:31.210090] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.308 [2024-11-15 10:58:31.210193] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.308 [2024-11-15 10:58:31.210208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:24.308 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.308 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.308 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:24.308 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.308 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.568 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.568 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:24.568 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:24.568 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:24.568 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:24.568 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.568 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:24.568 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:24.568 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:24.568 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:24.568 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:24.568 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:24.568 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.568 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:24.828 /dev/nbd0 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:24.828 10:58:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.828 1+0 records in 00:13:24.828 1+0 records out 00:13:24.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354353 s, 11.6 MB/s 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.828 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:25.088 /dev/nbd1 00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:25.088 10:58:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:25.088 1+0 records in
00:13:25.088 1+0 records out
00:13:25.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531885 s, 7.7 MB/s
00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096
00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0
00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:25.088 10:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:13:25.348 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:13:25.348 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:25.348 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:13:25.348 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:25.348 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:13:25.348 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:25.348 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:13:25.608 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:13:25.608 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:13:25.608 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:13:25.608 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:25.608 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:25.608 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:13:25.608 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:13:25.608 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:13:25.609 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:13:25.609 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:25.609 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:25.609 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:25.609 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:13:25.609 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:25.609 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:13:25.609 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:25.869 [2024-11-15 10:58:32.565825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:25.869 [2024-11-15 10:58:32.565930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:25.869 [2024-11-15 10:58:32.565994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:13:25.869 [2024-11-15 10:58:32.566029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:25.869 [2024-11-15 10:58:32.568283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:25.869 [2024-11-15 10:58:32.568379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:25.869 [2024-11-15 10:58:32.568526] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:13:25.869 [2024-11-15 10:58:32.568609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:25.869 [2024-11-15 10:58:32.568813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:25.869 spare
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:25.869 [2024-11-15 10:58:32.668772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:13:25.869 [2024-11-15 10:58:32.668839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:25.869 [2024-11-15 10:58:32.669139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0
00:13:25.869 [2024-11-15 10:58:32.669353] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:13:25.869 [2024-11-15 10:58:32.669407] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:13:25.869 [2024-11-15 10:58:32.669639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:25.869 "name": "raid_bdev1",
00:13:25.869 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc",
00:13:25.869 "strip_size_kb": 0,
00:13:25.869 "state": "online",
00:13:25.869 "raid_level": "raid1",
00:13:25.869 "superblock": true,
00:13:25.869 "num_base_bdevs": 2,
00:13:25.869 "num_base_bdevs_discovered": 2,
00:13:25.869 "num_base_bdevs_operational": 2,
00:13:25.869 "base_bdevs_list": [
00:13:25.869 {
00:13:25.869 "name": "spare",
00:13:25.869 "uuid": "04cf5566-8b25-598f-a748-ed80be52b7a1",
00:13:25.869 "is_configured": true,
00:13:25.869 "data_offset": 2048,
00:13:25.869 "data_size": 63488
00:13:25.869 },
00:13:25.869 {
00:13:25.869 "name": "BaseBdev2",
00:13:25.869 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6",
00:13:25.869 "is_configured": true,
00:13:25.869 "data_offset": 2048,
00:13:25.869 "data_size": 63488
00:13:25.869 }
00:13:25.869 ]
00:13:25.869 }'
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:25.869 10:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:26.440 "name": "raid_bdev1",
00:13:26.440 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc",
00:13:26.440 "strip_size_kb": 0,
00:13:26.440 "state": "online",
00:13:26.440 "raid_level": "raid1",
00:13:26.440 "superblock": true,
00:13:26.440 "num_base_bdevs": 2,
00:13:26.440 "num_base_bdevs_discovered": 2,
00:13:26.440 "num_base_bdevs_operational": 2,
00:13:26.440 "base_bdevs_list": [
00:13:26.440 {
00:13:26.440 "name": "spare",
00:13:26.440 "uuid": "04cf5566-8b25-598f-a748-ed80be52b7a1",
00:13:26.440 "is_configured": true,
00:13:26.440 "data_offset": 2048,
00:13:26.440 "data_size": 63488
00:13:26.440 },
00:13:26.440 {
00:13:26.440 "name": "BaseBdev2",
00:13:26.440 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6",
00:13:26.440 "is_configured": true,
00:13:26.440 "data_offset": 2048,
00:13:26.440 "data_size": 63488
00:13:26.440 }
00:13:26.440 ]
00:13:26.440 }'
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:26.440 [2024-11-15 10:58:33.300779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:26.440 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:26.440 "name": "raid_bdev1",
00:13:26.440 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc",
00:13:26.440 "strip_size_kb": 0,
00:13:26.440 "state": "online",
00:13:26.440 "raid_level": "raid1",
00:13:26.440 "superblock": true,
00:13:26.440 "num_base_bdevs": 2,
00:13:26.441 "num_base_bdevs_discovered": 1,
00:13:26.441 "num_base_bdevs_operational": 1,
00:13:26.441 "base_bdevs_list": [
00:13:26.441 {
00:13:26.441 "name": null,
00:13:26.441 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:26.441 "is_configured": false,
00:13:26.441 "data_offset": 0,
00:13:26.441 "data_size": 63488
00:13:26.441 },
00:13:26.441 {
00:13:26.441 "name": "BaseBdev2",
00:13:26.441 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6",
00:13:26.441 "is_configured": true,
00:13:26.441 "data_offset": 2048,
00:13:26.441 "data_size": 63488
00:13:26.441 }
00:13:26.441 ]
00:13:26.441 }'
00:13:26.441 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:26.441 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:27.019 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:27.019 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.019 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:27.019 [2024-11-15 10:58:33.708195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:27.019 [2024-11-15 10:58:33.708476] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:13:27.019 [2024-11-15 10:58:33.708497] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:13:27.019 [2024-11-15 10:58:33.708542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:27.019 [2024-11-15 10:58:33.724502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0
00:13:27.019 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.019 10:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1
00:13:27.019 [2024-11-15 10:58:33.726344] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:27.960 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:27.960 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:27.960 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:27.960 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:27.960 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:27.960 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:27.960 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:27.960 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.960 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:27.960 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.960 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:27.960 "name": "raid_bdev1",
00:13:27.961 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc",
00:13:27.961 "strip_size_kb": 0,
00:13:27.961 "state": "online",
00:13:27.961 "raid_level": "raid1",
00:13:27.961 "superblock": true,
00:13:27.961 "num_base_bdevs": 2,
00:13:27.961 "num_base_bdevs_discovered": 2,
00:13:27.961 "num_base_bdevs_operational": 2,
00:13:27.961 "process": {
00:13:27.961 "type": "rebuild",
00:13:27.961 "target": "spare",
00:13:27.961 "progress": {
00:13:27.961 "blocks": 20480,
00:13:27.961 "percent": 32
00:13:27.961 }
00:13:27.961 },
00:13:27.961 "base_bdevs_list": [
00:13:27.961 {
00:13:27.961 "name": "spare",
00:13:27.961 "uuid": "04cf5566-8b25-598f-a748-ed80be52b7a1",
00:13:27.961 "is_configured": true,
00:13:27.961 "data_offset": 2048,
00:13:27.961 "data_size": 63488
00:13:27.961 },
00:13:27.961 {
00:13:27.961 "name": "BaseBdev2",
00:13:27.961 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6",
00:13:27.961 "is_configured": true,
00:13:27.961 "data_offset": 2048,
00:13:27.961 "data_size": 63488
00:13:27.961 }
00:13:27.961 ]
00:13:27.961 }'
00:13:27.961 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:27.961 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:27.961 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:27.961 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:27.961 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:13:27.961 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.961 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:28.220 [2024-11-15 10:58:34.890121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:28.220 [2024-11-15 10:58:34.931688] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:28.220 [2024-11-15 10:58:34.931794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:28.220 [2024-11-15 10:58:34.931849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:28.220 [2024-11-15 10:58:34.931870] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:28.220 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.220 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:28.220 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:28.220 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:28.220 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:28.220 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:28.220 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:28.220 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:28.220 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:28.220 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:28.220 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:28.220 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:28.220 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:28.220 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.220 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:28.220 10:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.220 10:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:28.220 "name": "raid_bdev1",
00:13:28.220 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc",
00:13:28.220 "strip_size_kb": 0,
00:13:28.220 "state": "online",
00:13:28.220 "raid_level": "raid1",
00:13:28.220 "superblock": true,
00:13:28.220 "num_base_bdevs": 2,
00:13:28.220 "num_base_bdevs_discovered": 1,
00:13:28.220 "num_base_bdevs_operational": 1,
00:13:28.220 "base_bdevs_list": [
00:13:28.220 {
00:13:28.220 "name": null,
00:13:28.220 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:28.220 "is_configured": false,
00:13:28.220 "data_offset": 0,
00:13:28.220 "data_size": 63488
00:13:28.220 },
00:13:28.220 {
00:13:28.220 "name": "BaseBdev2",
00:13:28.220 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6",
00:13:28.220 "is_configured": true,
00:13:28.220 "data_offset": 2048,
00:13:28.220 "data_size": 63488
00:13:28.220 }
00:13:28.220 ]
00:13:28.220 }'
00:13:28.220 10:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:28.220 10:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:28.787 10:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:28.787 10:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.787 10:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:28.787 [2024-11-15 10:58:35.413878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:28.787 [2024-11-15 10:58:35.413954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:28.787 [2024-11-15 10:58:35.413980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:13:28.787 [2024-11-15 10:58:35.413989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:28.787 [2024-11-15 10:58:35.414480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:28.787 [2024-11-15 10:58:35.414503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:28.787 [2024-11-15 10:58:35.414608] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:13:28.787 [2024-11-15 10:58:35.414622] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:13:28.787 [2024-11-15 10:58:35.414634] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:13:28.787 [2024-11-15 10:58:35.414654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:28.787 [2024-11-15 10:58:35.431237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270
00:13:28.787 spare
00:13:28.787 10:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.787 10:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1
00:13:28.787 [2024-11-15 10:58:35.433179] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:29.726 "name": "raid_bdev1",
00:13:29.726 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc",
00:13:29.726 "strip_size_kb": 0,
00:13:29.726 "state": "online",
00:13:29.726 "raid_level": "raid1",
00:13:29.726 "superblock": true,
00:13:29.726 "num_base_bdevs": 2,
00:13:29.726 "num_base_bdevs_discovered": 2,
00:13:29.726 "num_base_bdevs_operational": 2,
00:13:29.726 "process": {
00:13:29.726 "type": "rebuild",
00:13:29.726 "target": "spare",
00:13:29.726 "progress": {
00:13:29.726 "blocks": 20480,
00:13:29.726 "percent": 32
00:13:29.726 }
00:13:29.726 },
00:13:29.726 "base_bdevs_list": [
00:13:29.726 {
00:13:29.726 "name": "spare",
00:13:29.726 "uuid": "04cf5566-8b25-598f-a748-ed80be52b7a1",
00:13:29.726 "is_configured": true,
00:13:29.726 "data_offset": 2048,
00:13:29.726 "data_size": 63488
00:13:29.726 },
00:13:29.726 {
00:13:29.726 "name": "BaseBdev2",
00:13:29.726 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6",
00:13:29.726 "is_configured": true,
00:13:29.726 "data_offset": 2048,
00:13:29.726 "data_size": 63488
00:13:29.726 }
00:13:29.726 ]
00:13:29.726 }'
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:29.726 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:29.726 [2024-11-15 10:58:36.592614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:29.726 [2024-11-15 10:58:36.638855] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:29.726 [2024-11-15 10:58:36.638951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:29.726 [2024-11-15 10:58:36.638967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:29.726 [2024-11-15 10:58:36.638977] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:30.009 "name": "raid_bdev1",
00:13:30.009 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc",
00:13:30.009 "strip_size_kb": 0,
00:13:30.009 "state": "online",
00:13:30.009 "raid_level": "raid1",
00:13:30.009 "superblock": true,
00:13:30.009 "num_base_bdevs": 2,
00:13:30.009 "num_base_bdevs_discovered": 1,
00:13:30.009 "num_base_bdevs_operational": 1,
00:13:30.009 "base_bdevs_list": [
00:13:30.009 {
00:13:30.009 "name": null,
00:13:30.009 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:30.009 "is_configured": false,
00:13:30.009 "data_offset": 0,
00:13:30.009 "data_size": 63488
00:13:30.009 },
00:13:30.009 {
00:13:30.009 "name": "BaseBdev2",
00:13:30.009 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6",
00:13:30.009 "is_configured": true,
00:13:30.009 "data_offset": 2048,
00:13:30.009 "data_size": 63488
00:13:30.009 }
00:13:30.009 ]
00:13:30.009 }'
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:30.009 10:58:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:30.273 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:30.273 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:30.273 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:30.273 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:30.273 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:30.273 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:30.273 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:30.273 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:30.273 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:30.273 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:30.273 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:30.273 "name": "raid_bdev1",
00:13:30.273 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc",
00:13:30.273 "strip_size_kb": 0,
00:13:30.273 "state": "online",
00:13:30.273 "raid_level": "raid1",
00:13:30.273 "superblock": true,
00:13:30.273 "num_base_bdevs": 2,
00:13:30.273 "num_base_bdevs_discovered": 1,
00:13:30.273 "num_base_bdevs_operational": 1,
00:13:30.273 "base_bdevs_list": [
00:13:30.273 {
00:13:30.273 "name": null,
00:13:30.273 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:30.273 "is_configured": false,
00:13:30.273 "data_offset": 0,
00:13:30.273 "data_size": 63488
00:13:30.273 },
00:13:30.273 {
00:13:30.273 "name": "BaseBdev2",
00:13:30.273 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6",
00:13:30.273 "is_configured": true,
00:13:30.273 "data_offset": 2048,
00:13:30.273 "data_size": 63488
00:13:30.273 }
00:13:30.273 ]
00:13:30.273 }'
00:13:30.273 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:30.273 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:30.273 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:30.532 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:30.532 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:13:30.532 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:30.532 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:30.532 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:30.532 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:30.532 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:30.532 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:30.532 [2024-11-15 10:58:37.239620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:13:30.532 [2024-11-15 10:58:37.239682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:30.532 [2024-11-15 10:58:37.239702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:13:30.532 [2024-11-15 10:58:37.239714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:30.532 [2024-11-15 10:58:37.240173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:30.532 [2024-11-15 10:58:37.240197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:30.532 [2024-11-15 10:58:37.240279] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:13:30.532 [2024-11-15 10:58:37.240310] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:13:30.532 [2024-11-15 10:58:37.240319] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:13:30.532 [2024-11-15 10:58:37.240331] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:13:30.532 BaseBdev1
00:13:30.532 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:30.532 10:58:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1
00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.471 "name": "raid_bdev1", 00:13:31.471 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc", 00:13:31.471 "strip_size_kb": 0, 00:13:31.471 "state": "online", 00:13:31.471 "raid_level": "raid1", 00:13:31.471 "superblock": true, 00:13:31.471 "num_base_bdevs": 2, 00:13:31.471 "num_base_bdevs_discovered": 1, 00:13:31.471 "num_base_bdevs_operational": 1, 00:13:31.471 "base_bdevs_list": [ 00:13:31.471 { 00:13:31.471 "name": null, 00:13:31.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.471 "is_configured": false, 00:13:31.471 "data_offset": 0, 00:13:31.471 "data_size": 63488 00:13:31.471 }, 00:13:31.471 { 00:13:31.471 "name": "BaseBdev2", 00:13:31.471 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6", 00:13:31.471 "is_configured": true, 00:13:31.471 "data_offset": 2048, 00:13:31.471 "data_size": 63488 00:13:31.471 } 00:13:31.471 ] 00:13:31.471 }' 00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.471 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.731 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.731 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.731 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.731 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.731 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.731 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.731 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:31.731 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.731 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.991 "name": "raid_bdev1", 00:13:31.991 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc", 00:13:31.991 "strip_size_kb": 0, 00:13:31.991 "state": "online", 00:13:31.991 "raid_level": "raid1", 00:13:31.991 "superblock": true, 00:13:31.991 "num_base_bdevs": 2, 00:13:31.991 "num_base_bdevs_discovered": 1, 00:13:31.991 "num_base_bdevs_operational": 1, 00:13:31.991 "base_bdevs_list": [ 00:13:31.991 { 00:13:31.991 "name": null, 00:13:31.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.991 "is_configured": false, 00:13:31.991 "data_offset": 0, 00:13:31.991 "data_size": 63488 00:13:31.991 }, 00:13:31.991 { 00:13:31.991 "name": "BaseBdev2", 00:13:31.991 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6", 00:13:31.991 "is_configured": true, 00:13:31.991 "data_offset": 2048, 00:13:31.991 "data_size": 63488 00:13:31.991 } 00:13:31.991 ] 00:13:31.991 }' 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@650 -- # local es=0 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.991 [2024-11-15 10:58:38.797211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.991 [2024-11-15 10:58:38.797484] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:31.991 [2024-11-15 10:58:38.797557] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:31.991 request: 00:13:31.991 { 00:13:31.991 "base_bdev": "BaseBdev1", 00:13:31.991 "raid_bdev": "raid_bdev1", 00:13:31.991 "method": "bdev_raid_add_base_bdev", 00:13:31.991 "req_id": 1 00:13:31.991 } 00:13:31.991 Got JSON-RPC error response 00:13:31.991 response: 00:13:31.991 { 00:13:31.991 "code": -22, 00:13:31.991 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:31.991 } 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:31.991 10:58:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:32.930 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:32.930 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.930 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.930 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.930 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.930 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:32.930 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.930 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.930 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.931 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.931 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.931 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.931 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:32.931 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.931 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.190 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.190 "name": "raid_bdev1", 00:13:33.190 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc", 00:13:33.190 "strip_size_kb": 0, 00:13:33.190 "state": "online", 00:13:33.190 "raid_level": "raid1", 00:13:33.190 "superblock": true, 00:13:33.190 "num_base_bdevs": 2, 00:13:33.190 "num_base_bdevs_discovered": 1, 00:13:33.190 "num_base_bdevs_operational": 1, 00:13:33.190 "base_bdevs_list": [ 00:13:33.190 { 00:13:33.190 "name": null, 00:13:33.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.190 "is_configured": false, 00:13:33.190 "data_offset": 0, 00:13:33.190 "data_size": 63488 00:13:33.190 }, 00:13:33.190 { 00:13:33.190 "name": "BaseBdev2", 00:13:33.190 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6", 00:13:33.190 "is_configured": true, 00:13:33.190 "data_offset": 2048, 00:13:33.190 "data_size": 63488 00:13:33.190 } 00:13:33.191 ] 00:13:33.191 }' 00:13:33.191 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.191 10:58:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.451 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:33.451 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.451 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:33.451 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:33.451 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.451 10:58:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.451 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.451 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.451 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.451 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.451 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.451 "name": "raid_bdev1", 00:13:33.451 "uuid": "637ec44f-8092-4223-9dfd-0734c6e5e2bc", 00:13:33.451 "strip_size_kb": 0, 00:13:33.451 "state": "online", 00:13:33.451 "raid_level": "raid1", 00:13:33.451 "superblock": true, 00:13:33.451 "num_base_bdevs": 2, 00:13:33.451 "num_base_bdevs_discovered": 1, 00:13:33.451 "num_base_bdevs_operational": 1, 00:13:33.451 "base_bdevs_list": [ 00:13:33.451 { 00:13:33.451 "name": null, 00:13:33.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.451 "is_configured": false, 00:13:33.451 "data_offset": 0, 00:13:33.451 "data_size": 63488 00:13:33.451 }, 00:13:33.451 { 00:13:33.451 "name": "BaseBdev2", 00:13:33.451 "uuid": "c0643115-b4ba-5c1b-b046-e5b82b7c2bb6", 00:13:33.451 "is_configured": true, 00:13:33.451 "data_offset": 2048, 00:13:33.451 "data_size": 63488 00:13:33.451 } 00:13:33.451 ] 00:13:33.451 }' 00:13:33.451 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.711 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.711 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.711 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:33.711 10:58:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77028 00:13:33.711 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 77028 ']' 00:13:33.711 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 77028 00:13:33.711 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:13:33.711 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:33.711 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77028 00:13:33.711 killing process with pid 77028 00:13:33.711 Received shutdown signal, test time was about 17.174188 seconds 00:13:33.711 00:13:33.711 Latency(us) 00:13:33.711 [2024-11-15T10:58:40.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.711 [2024-11-15T10:58:40.639Z] =================================================================================================================== 00:13:33.711 [2024-11-15T10:58:40.639Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:33.711 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:33.711 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:33.711 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77028' 00:13:33.711 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 77028 00:13:33.711 [2024-11-15 10:58:40.453739] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:33.711 10:58:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 77028 00:13:33.711 [2024-11-15 10:58:40.453893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.711 [2024-11-15 10:58:40.453948] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.711 [2024-11-15 10:58:40.453958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:33.970 [2024-11-15 10:58:40.691565] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:35.351 00:13:35.351 real 0m20.402s 00:13:35.351 user 0m26.817s 00:13:35.351 sys 0m2.210s 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:35.351 ************************************ 00:13:35.351 END TEST raid_rebuild_test_sb_io 00:13:35.351 ************************************ 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.351 10:58:41 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:35.351 10:58:41 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:35.351 10:58:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:35.351 10:58:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:35.351 10:58:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:35.351 ************************************ 00:13:35.351 START TEST raid_rebuild_test 00:13:35.351 ************************************ 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:35.351 10:58:41 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77717 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77717 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 77717 ']' 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:35.351 10:58:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.351 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:35.351 Zero copy mechanism will not be used. 
00:13:35.351 [2024-11-15 10:58:42.009380] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:13:35.351 [2024-11-15 10:58:42.009495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77717 ] 00:13:35.351 [2024-11-15 10:58:42.182343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.611 [2024-11-15 10:58:42.295905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.611 [2024-11-15 10:58:42.489521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.611 [2024-11-15 10:58:42.489558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:36.180 10:58:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:36.180 10:58:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:13:36.180 10:58:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:36.180 10:58:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:36.180 10:58:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.180 10:58:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.180 BaseBdev1_malloc 00:13:36.180 10:58:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.180 10:58:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:36.180 10:58:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.180 10:58:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.180 
[2024-11-15 10:58:42.927741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:36.180 [2024-11-15 10:58:42.927822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.180 [2024-11-15 10:58:42.927844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:36.180 [2024-11-15 10:58:42.927855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.180 [2024-11-15 10:58:42.929909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.180 [2024-11-15 10:58:42.929960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:36.180 BaseBdev1 00:13:36.180 10:58:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.180 10:58:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:36.180 10:58:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:36.180 10:58:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.180 10:58:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.180 BaseBdev2_malloc 00:13:36.181 10:58:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.181 10:58:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:36.181 10:58:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.181 10:58:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.181 [2024-11-15 10:58:42.980756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:36.181 [2024-11-15 10:58:42.980818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:36.181 [2024-11-15 10:58:42.980836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:36.181 [2024-11-15 10:58:42.980846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.181 [2024-11-15 10:58:42.982843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.181 [2024-11-15 10:58:42.982933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:36.181 BaseBdev2 00:13:36.181 10:58:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.181 10:58:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:36.181 10:58:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:36.181 10:58:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.181 10:58:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.181 BaseBdev3_malloc 00:13:36.181 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.181 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:36.181 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.181 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.181 [2024-11-15 10:58:43.048923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:36.181 [2024-11-15 10:58:43.049030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.181 [2024-11-15 10:58:43.049055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:36.181 [2024-11-15 10:58:43.049065] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.181 [2024-11-15 10:58:43.051163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.181 [2024-11-15 10:58:43.051238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:36.181 BaseBdev3 00:13:36.181 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.181 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:36.181 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:36.181 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.181 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.181 BaseBdev4_malloc 00:13:36.181 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.181 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:36.181 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.181 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.181 [2024-11-15 10:58:43.102620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:36.181 [2024-11-15 10:58:43.102677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.181 [2024-11-15 10:58:43.102711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:36.181 [2024-11-15 10:58:43.102721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.181 [2024-11-15 10:58:43.104770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.440 [2024-11-15 10:58:43.104863] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:36.440 BaseBdev4 00:13:36.440 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.440 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:36.440 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.440 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.440 spare_malloc 00:13:36.440 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.440 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:36.440 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.440 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.440 spare_delay 00:13:36.440 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.440 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:36.440 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.440 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.440 [2024-11-15 10:58:43.160506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:36.440 [2024-11-15 10:58:43.160564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.441 [2024-11-15 10:58:43.160599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:36.441 [2024-11-15 10:58:43.160610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.441 [2024-11-15 
10:58:43.162671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.441 [2024-11-15 10:58:43.162708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:36.441 spare 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.441 [2024-11-15 10:58:43.168546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:36.441 [2024-11-15 10:58:43.170328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:36.441 [2024-11-15 10:58:43.170396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:36.441 [2024-11-15 10:58:43.170449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:36.441 [2024-11-15 10:58:43.170525] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:36.441 [2024-11-15 10:58:43.170539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:36.441 [2024-11-15 10:58:43.170786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:36.441 [2024-11-15 10:58:43.170955] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:36.441 [2024-11-15 10:58:43.170967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:36.441 [2024-11-15 10:58:43.171122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.441 "name": "raid_bdev1", 00:13:36.441 "uuid": "1c924ce3-62cb-47e1-9ad4-6ccffa4baa6b", 00:13:36.441 "strip_size_kb": 0, 00:13:36.441 "state": "online", 00:13:36.441 "raid_level": 
"raid1", 00:13:36.441 "superblock": false, 00:13:36.441 "num_base_bdevs": 4, 00:13:36.441 "num_base_bdevs_discovered": 4, 00:13:36.441 "num_base_bdevs_operational": 4, 00:13:36.441 "base_bdevs_list": [ 00:13:36.441 { 00:13:36.441 "name": "BaseBdev1", 00:13:36.441 "uuid": "e5fe063c-886d-5cb7-8d64-475146c7c702", 00:13:36.441 "is_configured": true, 00:13:36.441 "data_offset": 0, 00:13:36.441 "data_size": 65536 00:13:36.441 }, 00:13:36.441 { 00:13:36.441 "name": "BaseBdev2", 00:13:36.441 "uuid": "53080c05-699f-5466-9790-3a3568163248", 00:13:36.441 "is_configured": true, 00:13:36.441 "data_offset": 0, 00:13:36.441 "data_size": 65536 00:13:36.441 }, 00:13:36.441 { 00:13:36.441 "name": "BaseBdev3", 00:13:36.441 "uuid": "a62b2392-5abe-51b8-adae-95436451d11c", 00:13:36.441 "is_configured": true, 00:13:36.441 "data_offset": 0, 00:13:36.441 "data_size": 65536 00:13:36.441 }, 00:13:36.441 { 00:13:36.441 "name": "BaseBdev4", 00:13:36.441 "uuid": "b0b657b6-355b-5b4c-89da-d4d7cf1b6f55", 00:13:36.441 "is_configured": true, 00:13:36.441 "data_offset": 0, 00:13:36.441 "data_size": 65536 00:13:36.441 } 00:13:36.441 ] 00:13:36.441 }' 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.441 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.701 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:36.701 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:36.701 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.701 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.701 [2024-11-15 10:58:43.612149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.961 10:58:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:36.961 10:58:43 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:36.961 [2024-11-15 10:58:43.871471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:37.221 /dev/nbd0 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.221 1+0 records in 00:13:37.221 1+0 records out 00:13:37.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441015 s, 9.3 MB/s 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:37.221 10:58:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:43.815 65536+0 records in 00:13:43.815 65536+0 records out 00:13:43.815 33554432 bytes (34 MB, 32 MiB) copied, 6.02723 s, 5.6 MB/s 00:13:43.815 10:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:43.815 10:58:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.815 10:58:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:43.815 10:58:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:43.815 10:58:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:43.815 10:58:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.815 10:58:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:43.815 [2024-11-15 10:58:50.178364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:43.815 
10:58:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.815 [2024-11-15 10:58:50.214396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.815 10:58:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.815 "name": "raid_bdev1", 00:13:43.815 "uuid": "1c924ce3-62cb-47e1-9ad4-6ccffa4baa6b", 00:13:43.815 "strip_size_kb": 0, 00:13:43.815 "state": "online", 00:13:43.815 "raid_level": "raid1", 00:13:43.815 "superblock": false, 00:13:43.815 "num_base_bdevs": 4, 00:13:43.815 "num_base_bdevs_discovered": 3, 00:13:43.815 "num_base_bdevs_operational": 3, 00:13:43.815 "base_bdevs_list": [ 00:13:43.815 { 00:13:43.815 "name": null, 00:13:43.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.815 "is_configured": false, 00:13:43.815 "data_offset": 0, 00:13:43.815 "data_size": 65536 00:13:43.815 }, 00:13:43.815 { 00:13:43.815 "name": "BaseBdev2", 00:13:43.815 "uuid": "53080c05-699f-5466-9790-3a3568163248", 00:13:43.815 "is_configured": true, 00:13:43.815 "data_offset": 0, 00:13:43.815 "data_size": 65536 00:13:43.815 }, 00:13:43.815 { 00:13:43.815 "name": "BaseBdev3", 00:13:43.815 "uuid": "a62b2392-5abe-51b8-adae-95436451d11c", 00:13:43.815 "is_configured": true, 00:13:43.815 "data_offset": 0, 00:13:43.815 "data_size": 65536 00:13:43.815 }, 00:13:43.815 { 00:13:43.815 "name": "BaseBdev4", 00:13:43.815 "uuid": "b0b657b6-355b-5b4c-89da-d4d7cf1b6f55", 00:13:43.815 
"is_configured": true, 00:13:43.815 "data_offset": 0, 00:13:43.815 "data_size": 65536 00:13:43.815 } 00:13:43.815 ] 00:13:43.815 }' 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.815 [2024-11-15 10:58:50.669622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:43.815 [2024-11-15 10:58:50.686315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.815 10:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:43.816 [2024-11-15 10:58:50.688588] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:44.778 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.778 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.778 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.778 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.778 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.778 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.778 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:44.778 10:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.778 10:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.038 "name": "raid_bdev1", 00:13:45.038 "uuid": "1c924ce3-62cb-47e1-9ad4-6ccffa4baa6b", 00:13:45.038 "strip_size_kb": 0, 00:13:45.038 "state": "online", 00:13:45.038 "raid_level": "raid1", 00:13:45.038 "superblock": false, 00:13:45.038 "num_base_bdevs": 4, 00:13:45.038 "num_base_bdevs_discovered": 4, 00:13:45.038 "num_base_bdevs_operational": 4, 00:13:45.038 "process": { 00:13:45.038 "type": "rebuild", 00:13:45.038 "target": "spare", 00:13:45.038 "progress": { 00:13:45.038 "blocks": 20480, 00:13:45.038 "percent": 31 00:13:45.038 } 00:13:45.038 }, 00:13:45.038 "base_bdevs_list": [ 00:13:45.038 { 00:13:45.038 "name": "spare", 00:13:45.038 "uuid": "2486fd0a-fdb6-5ff6-9551-1af3f86d4ee5", 00:13:45.038 "is_configured": true, 00:13:45.038 "data_offset": 0, 00:13:45.038 "data_size": 65536 00:13:45.038 }, 00:13:45.038 { 00:13:45.038 "name": "BaseBdev2", 00:13:45.038 "uuid": "53080c05-699f-5466-9790-3a3568163248", 00:13:45.038 "is_configured": true, 00:13:45.038 "data_offset": 0, 00:13:45.038 "data_size": 65536 00:13:45.038 }, 00:13:45.038 { 00:13:45.038 "name": "BaseBdev3", 00:13:45.038 "uuid": "a62b2392-5abe-51b8-adae-95436451d11c", 00:13:45.038 "is_configured": true, 00:13:45.038 "data_offset": 0, 00:13:45.038 "data_size": 65536 00:13:45.038 }, 00:13:45.038 { 00:13:45.038 "name": "BaseBdev4", 00:13:45.038 "uuid": "b0b657b6-355b-5b4c-89da-d4d7cf1b6f55", 00:13:45.038 "is_configured": true, 00:13:45.038 "data_offset": 0, 00:13:45.038 "data_size": 65536 00:13:45.038 } 00:13:45.038 ] 00:13:45.038 }' 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.038 [2024-11-15 10:58:51.848883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.038 [2024-11-15 10:58:51.898668] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:45.038 [2024-11-15 10:58:51.898764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.038 [2024-11-15 10:58:51.898782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.038 [2024-11-15 10:58:51.898793] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.038 10:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.039 10:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.298 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.298 "name": "raid_bdev1", 00:13:45.298 "uuid": "1c924ce3-62cb-47e1-9ad4-6ccffa4baa6b", 00:13:45.298 "strip_size_kb": 0, 00:13:45.298 "state": "online", 00:13:45.298 "raid_level": "raid1", 00:13:45.298 "superblock": false, 00:13:45.298 "num_base_bdevs": 4, 00:13:45.298 "num_base_bdevs_discovered": 3, 00:13:45.298 "num_base_bdevs_operational": 3, 00:13:45.298 "base_bdevs_list": [ 00:13:45.298 { 00:13:45.298 "name": null, 00:13:45.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.298 "is_configured": false, 00:13:45.298 "data_offset": 0, 00:13:45.298 "data_size": 65536 00:13:45.298 }, 00:13:45.298 { 00:13:45.298 "name": "BaseBdev2", 00:13:45.298 "uuid": "53080c05-699f-5466-9790-3a3568163248", 00:13:45.298 "is_configured": true, 00:13:45.298 "data_offset": 0, 00:13:45.298 "data_size": 65536 00:13:45.298 }, 00:13:45.298 { 
00:13:45.298 "name": "BaseBdev3", 00:13:45.298 "uuid": "a62b2392-5abe-51b8-adae-95436451d11c", 00:13:45.298 "is_configured": true, 00:13:45.298 "data_offset": 0, 00:13:45.298 "data_size": 65536 00:13:45.298 }, 00:13:45.298 { 00:13:45.298 "name": "BaseBdev4", 00:13:45.298 "uuid": "b0b657b6-355b-5b4c-89da-d4d7cf1b6f55", 00:13:45.298 "is_configured": true, 00:13:45.298 "data_offset": 0, 00:13:45.298 "data_size": 65536 00:13:45.298 } 00:13:45.298 ] 00:13:45.298 }' 00:13:45.298 10:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.298 10:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.558 10:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:45.558 10:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.558 10:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:45.558 10:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:45.558 10:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.558 10:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.558 10:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.558 10:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.558 10:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.558 10:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.558 10:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.558 "name": "raid_bdev1", 00:13:45.558 "uuid": "1c924ce3-62cb-47e1-9ad4-6ccffa4baa6b", 00:13:45.558 "strip_size_kb": 0, 00:13:45.558 "state": "online", 
00:13:45.558 "raid_level": "raid1", 00:13:45.558 "superblock": false, 00:13:45.558 "num_base_bdevs": 4, 00:13:45.558 "num_base_bdevs_discovered": 3, 00:13:45.558 "num_base_bdevs_operational": 3, 00:13:45.558 "base_bdevs_list": [ 00:13:45.558 { 00:13:45.558 "name": null, 00:13:45.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.558 "is_configured": false, 00:13:45.558 "data_offset": 0, 00:13:45.558 "data_size": 65536 00:13:45.558 }, 00:13:45.558 { 00:13:45.558 "name": "BaseBdev2", 00:13:45.558 "uuid": "53080c05-699f-5466-9790-3a3568163248", 00:13:45.558 "is_configured": true, 00:13:45.558 "data_offset": 0, 00:13:45.558 "data_size": 65536 00:13:45.558 }, 00:13:45.558 { 00:13:45.558 "name": "BaseBdev3", 00:13:45.558 "uuid": "a62b2392-5abe-51b8-adae-95436451d11c", 00:13:45.558 "is_configured": true, 00:13:45.558 "data_offset": 0, 00:13:45.558 "data_size": 65536 00:13:45.558 }, 00:13:45.558 { 00:13:45.558 "name": "BaseBdev4", 00:13:45.558 "uuid": "b0b657b6-355b-5b4c-89da-d4d7cf1b6f55", 00:13:45.558 "is_configured": true, 00:13:45.558 "data_offset": 0, 00:13:45.558 "data_size": 65536 00:13:45.558 } 00:13:45.558 ] 00:13:45.558 }' 00:13:45.558 10:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.558 10:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:45.558 10:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.818 10:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:45.818 10:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:45.818 10:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.818 10:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.818 [2024-11-15 10:58:52.506801] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.818 [2024-11-15 10:58:52.521498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:45.818 10:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.818 10:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:45.818 [2024-11-15 10:58:52.523738] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:46.759 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.759 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.759 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.759 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.759 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.759 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.759 10:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.759 10:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.759 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.759 10:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.759 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.759 "name": "raid_bdev1", 00:13:46.759 "uuid": "1c924ce3-62cb-47e1-9ad4-6ccffa4baa6b", 00:13:46.759 "strip_size_kb": 0, 00:13:46.759 "state": "online", 00:13:46.759 "raid_level": "raid1", 00:13:46.759 "superblock": false, 00:13:46.759 "num_base_bdevs": 4, 00:13:46.759 
"num_base_bdevs_discovered": 4, 00:13:46.759 "num_base_bdevs_operational": 4, 00:13:46.759 "process": { 00:13:46.759 "type": "rebuild", 00:13:46.759 "target": "spare", 00:13:46.759 "progress": { 00:13:46.759 "blocks": 20480, 00:13:46.759 "percent": 31 00:13:46.759 } 00:13:46.759 }, 00:13:46.759 "base_bdevs_list": [ 00:13:46.759 { 00:13:46.759 "name": "spare", 00:13:46.759 "uuid": "2486fd0a-fdb6-5ff6-9551-1af3f86d4ee5", 00:13:46.759 "is_configured": true, 00:13:46.759 "data_offset": 0, 00:13:46.759 "data_size": 65536 00:13:46.759 }, 00:13:46.759 { 00:13:46.759 "name": "BaseBdev2", 00:13:46.759 "uuid": "53080c05-699f-5466-9790-3a3568163248", 00:13:46.759 "is_configured": true, 00:13:46.759 "data_offset": 0, 00:13:46.759 "data_size": 65536 00:13:46.759 }, 00:13:46.759 { 00:13:46.759 "name": "BaseBdev3", 00:13:46.759 "uuid": "a62b2392-5abe-51b8-adae-95436451d11c", 00:13:46.759 "is_configured": true, 00:13:46.759 "data_offset": 0, 00:13:46.759 "data_size": 65536 00:13:46.759 }, 00:13:46.759 { 00:13:46.759 "name": "BaseBdev4", 00:13:46.759 "uuid": "b0b657b6-355b-5b4c-89da-d4d7cf1b6f55", 00:13:46.759 "is_configured": true, 00:13:46.759 "data_offset": 0, 00:13:46.760 "data_size": 65536 00:13:46.760 } 00:13:46.760 ] 00:13:46.760 }' 00:13:46.760 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.760 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.760 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.760 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.760 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:46.760 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:46.760 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:13:46.760 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:46.760 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:46.760 10:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.760 10:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.760 [2024-11-15 10:58:53.675550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:47.020 [2024-11-15 10:58:53.733267] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.020 10:58:53 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.020 "name": "raid_bdev1", 00:13:47.020 "uuid": "1c924ce3-62cb-47e1-9ad4-6ccffa4baa6b", 00:13:47.020 "strip_size_kb": 0, 00:13:47.020 "state": "online", 00:13:47.020 "raid_level": "raid1", 00:13:47.020 "superblock": false, 00:13:47.020 "num_base_bdevs": 4, 00:13:47.020 "num_base_bdevs_discovered": 3, 00:13:47.020 "num_base_bdevs_operational": 3, 00:13:47.020 "process": { 00:13:47.020 "type": "rebuild", 00:13:47.020 "target": "spare", 00:13:47.020 "progress": { 00:13:47.020 "blocks": 24576, 00:13:47.020 "percent": 37 00:13:47.020 } 00:13:47.020 }, 00:13:47.020 "base_bdevs_list": [ 00:13:47.020 { 00:13:47.020 "name": "spare", 00:13:47.020 "uuid": "2486fd0a-fdb6-5ff6-9551-1af3f86d4ee5", 00:13:47.020 "is_configured": true, 00:13:47.020 "data_offset": 0, 00:13:47.020 "data_size": 65536 00:13:47.020 }, 00:13:47.020 { 00:13:47.020 "name": null, 00:13:47.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.020 "is_configured": false, 00:13:47.020 "data_offset": 0, 00:13:47.020 "data_size": 65536 00:13:47.020 }, 00:13:47.020 { 00:13:47.020 "name": "BaseBdev3", 00:13:47.020 "uuid": "a62b2392-5abe-51b8-adae-95436451d11c", 00:13:47.020 "is_configured": true, 00:13:47.020 "data_offset": 0, 00:13:47.020 "data_size": 65536 00:13:47.020 }, 00:13:47.020 { 00:13:47.020 "name": "BaseBdev4", 00:13:47.020 "uuid": "b0b657b6-355b-5b4c-89da-d4d7cf1b6f55", 00:13:47.020 "is_configured": true, 00:13:47.020 "data_offset": 0, 00:13:47.020 "data_size": 65536 00:13:47.020 } 00:13:47.020 ] 00:13:47.020 }' 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=452 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.020 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.020 "name": "raid_bdev1", 00:13:47.020 "uuid": "1c924ce3-62cb-47e1-9ad4-6ccffa4baa6b", 00:13:47.020 "strip_size_kb": 0, 00:13:47.020 "state": "online", 00:13:47.020 "raid_level": "raid1", 00:13:47.020 "superblock": false, 00:13:47.020 "num_base_bdevs": 4, 00:13:47.020 "num_base_bdevs_discovered": 3, 00:13:47.020 "num_base_bdevs_operational": 3, 00:13:47.020 "process": { 00:13:47.020 "type": "rebuild", 00:13:47.020 "target": "spare", 00:13:47.020 "progress": { 
00:13:47.020 "blocks": 26624, 00:13:47.020 "percent": 40 00:13:47.020 } 00:13:47.020 }, 00:13:47.020 "base_bdevs_list": [ 00:13:47.020 { 00:13:47.020 "name": "spare", 00:13:47.020 "uuid": "2486fd0a-fdb6-5ff6-9551-1af3f86d4ee5", 00:13:47.020 "is_configured": true, 00:13:47.020 "data_offset": 0, 00:13:47.020 "data_size": 65536 00:13:47.020 }, 00:13:47.020 { 00:13:47.020 "name": null, 00:13:47.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.020 "is_configured": false, 00:13:47.020 "data_offset": 0, 00:13:47.020 "data_size": 65536 00:13:47.020 }, 00:13:47.020 { 00:13:47.020 "name": "BaseBdev3", 00:13:47.020 "uuid": "a62b2392-5abe-51b8-adae-95436451d11c", 00:13:47.020 "is_configured": true, 00:13:47.020 "data_offset": 0, 00:13:47.020 "data_size": 65536 00:13:47.020 }, 00:13:47.020 { 00:13:47.020 "name": "BaseBdev4", 00:13:47.020 "uuid": "b0b657b6-355b-5b4c-89da-d4d7cf1b6f55", 00:13:47.020 "is_configured": true, 00:13:47.020 "data_offset": 0, 00:13:47.020 "data_size": 65536 00:13:47.020 } 00:13:47.021 ] 00:13:47.021 }' 00:13:47.021 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.021 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.021 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.280 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.280 10:58:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:48.218 10:58:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:48.218 10:58:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.218 10:58:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.218 10:58:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:48.218 10:58:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.218 10:58:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.218 10:58:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.218 10:58:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.218 10:58:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.218 10:58:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.218 10:58:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.218 10:58:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.218 "name": "raid_bdev1", 00:13:48.218 "uuid": "1c924ce3-62cb-47e1-9ad4-6ccffa4baa6b", 00:13:48.218 "strip_size_kb": 0, 00:13:48.218 "state": "online", 00:13:48.218 "raid_level": "raid1", 00:13:48.218 "superblock": false, 00:13:48.218 "num_base_bdevs": 4, 00:13:48.218 "num_base_bdevs_discovered": 3, 00:13:48.218 "num_base_bdevs_operational": 3, 00:13:48.218 "process": { 00:13:48.218 "type": "rebuild", 00:13:48.218 "target": "spare", 00:13:48.218 "progress": { 00:13:48.218 "blocks": 49152, 00:13:48.218 "percent": 75 00:13:48.218 } 00:13:48.218 }, 00:13:48.218 "base_bdevs_list": [ 00:13:48.218 { 00:13:48.218 "name": "spare", 00:13:48.218 "uuid": "2486fd0a-fdb6-5ff6-9551-1af3f86d4ee5", 00:13:48.218 "is_configured": true, 00:13:48.218 "data_offset": 0, 00:13:48.218 "data_size": 65536 00:13:48.218 }, 00:13:48.218 { 00:13:48.218 "name": null, 00:13:48.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.218 "is_configured": false, 00:13:48.218 "data_offset": 0, 00:13:48.218 "data_size": 65536 00:13:48.218 }, 00:13:48.218 { 00:13:48.218 "name": "BaseBdev3", 00:13:48.218 "uuid": 
"a62b2392-5abe-51b8-adae-95436451d11c", 00:13:48.218 "is_configured": true, 00:13:48.218 "data_offset": 0, 00:13:48.218 "data_size": 65536 00:13:48.219 }, 00:13:48.219 { 00:13:48.219 "name": "BaseBdev4", 00:13:48.219 "uuid": "b0b657b6-355b-5b4c-89da-d4d7cf1b6f55", 00:13:48.219 "is_configured": true, 00:13:48.219 "data_offset": 0, 00:13:48.219 "data_size": 65536 00:13:48.219 } 00:13:48.219 ] 00:13:48.219 }' 00:13:48.219 10:58:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.219 10:58:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.219 10:58:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.478 10:58:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.478 10:58:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:49.047 [2024-11-15 10:58:55.750022] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:49.047 [2024-11-15 10:58:55.750141] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:49.047 [2024-11-15 10:58:55.750207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.307 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.307 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.307 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.307 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.307 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.307 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.307 10:58:56 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.307 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.307 10:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.307 10:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.307 10:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.307 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.307 "name": "raid_bdev1", 00:13:49.307 "uuid": "1c924ce3-62cb-47e1-9ad4-6ccffa4baa6b", 00:13:49.307 "strip_size_kb": 0, 00:13:49.307 "state": "online", 00:13:49.307 "raid_level": "raid1", 00:13:49.307 "superblock": false, 00:13:49.307 "num_base_bdevs": 4, 00:13:49.307 "num_base_bdevs_discovered": 3, 00:13:49.307 "num_base_bdevs_operational": 3, 00:13:49.307 "base_bdevs_list": [ 00:13:49.307 { 00:13:49.307 "name": "spare", 00:13:49.307 "uuid": "2486fd0a-fdb6-5ff6-9551-1af3f86d4ee5", 00:13:49.307 "is_configured": true, 00:13:49.307 "data_offset": 0, 00:13:49.307 "data_size": 65536 00:13:49.307 }, 00:13:49.307 { 00:13:49.307 "name": null, 00:13:49.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.307 "is_configured": false, 00:13:49.307 "data_offset": 0, 00:13:49.307 "data_size": 65536 00:13:49.307 }, 00:13:49.307 { 00:13:49.307 "name": "BaseBdev3", 00:13:49.307 "uuid": "a62b2392-5abe-51b8-adae-95436451d11c", 00:13:49.307 "is_configured": true, 00:13:49.307 "data_offset": 0, 00:13:49.307 "data_size": 65536 00:13:49.307 }, 00:13:49.307 { 00:13:49.307 "name": "BaseBdev4", 00:13:49.307 "uuid": "b0b657b6-355b-5b4c-89da-d4d7cf1b6f55", 00:13:49.307 "is_configured": true, 00:13:49.307 "data_offset": 0, 00:13:49.307 "data_size": 65536 00:13:49.307 } 00:13:49.307 ] 00:13:49.307 }' 00:13:49.307 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.567 "name": "raid_bdev1", 00:13:49.567 "uuid": "1c924ce3-62cb-47e1-9ad4-6ccffa4baa6b", 00:13:49.567 "strip_size_kb": 0, 00:13:49.567 "state": "online", 00:13:49.567 "raid_level": "raid1", 00:13:49.567 "superblock": false, 00:13:49.567 "num_base_bdevs": 4, 00:13:49.567 "num_base_bdevs_discovered": 3, 00:13:49.567 "num_base_bdevs_operational": 3, 00:13:49.567 
"base_bdevs_list": [ 00:13:49.567 { 00:13:49.567 "name": "spare", 00:13:49.567 "uuid": "2486fd0a-fdb6-5ff6-9551-1af3f86d4ee5", 00:13:49.567 "is_configured": true, 00:13:49.567 "data_offset": 0, 00:13:49.567 "data_size": 65536 00:13:49.567 }, 00:13:49.567 { 00:13:49.567 "name": null, 00:13:49.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.567 "is_configured": false, 00:13:49.567 "data_offset": 0, 00:13:49.567 "data_size": 65536 00:13:49.567 }, 00:13:49.567 { 00:13:49.567 "name": "BaseBdev3", 00:13:49.567 "uuid": "a62b2392-5abe-51b8-adae-95436451d11c", 00:13:49.567 "is_configured": true, 00:13:49.567 "data_offset": 0, 00:13:49.567 "data_size": 65536 00:13:49.567 }, 00:13:49.567 { 00:13:49.567 "name": "BaseBdev4", 00:13:49.567 "uuid": "b0b657b6-355b-5b4c-89da-d4d7cf1b6f55", 00:13:49.567 "is_configured": true, 00:13:49.567 "data_offset": 0, 00:13:49.567 "data_size": 65536 00:13:49.567 } 00:13:49.567 ] 00:13:49.567 }' 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.567 10:58:56 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.568 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.568 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.568 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.568 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.568 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.568 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.568 10:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.568 10:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.568 10:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.568 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.568 "name": "raid_bdev1", 00:13:49.568 "uuid": "1c924ce3-62cb-47e1-9ad4-6ccffa4baa6b", 00:13:49.568 "strip_size_kb": 0, 00:13:49.568 "state": "online", 00:13:49.568 "raid_level": "raid1", 00:13:49.568 "superblock": false, 00:13:49.568 "num_base_bdevs": 4, 00:13:49.568 "num_base_bdevs_discovered": 3, 00:13:49.568 "num_base_bdevs_operational": 3, 00:13:49.568 "base_bdevs_list": [ 00:13:49.568 { 00:13:49.568 "name": "spare", 00:13:49.568 "uuid": "2486fd0a-fdb6-5ff6-9551-1af3f86d4ee5", 00:13:49.568 "is_configured": true, 00:13:49.568 "data_offset": 0, 00:13:49.568 "data_size": 65536 00:13:49.568 }, 00:13:49.568 { 00:13:49.568 "name": null, 00:13:49.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.568 "is_configured": false, 00:13:49.568 "data_offset": 0, 00:13:49.568 "data_size": 65536 00:13:49.568 }, 00:13:49.568 { 00:13:49.568 "name": "BaseBdev3", 00:13:49.568 "uuid": 
"a62b2392-5abe-51b8-adae-95436451d11c", 00:13:49.568 "is_configured": true, 00:13:49.568 "data_offset": 0, 00:13:49.568 "data_size": 65536 00:13:49.568 }, 00:13:49.568 { 00:13:49.568 "name": "BaseBdev4", 00:13:49.568 "uuid": "b0b657b6-355b-5b4c-89da-d4d7cf1b6f55", 00:13:49.568 "is_configured": true, 00:13:49.568 "data_offset": 0, 00:13:49.568 "data_size": 65536 00:13:49.568 } 00:13:49.568 ] 00:13:49.568 }' 00:13:49.568 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.568 10:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.137 [2024-11-15 10:58:56.824683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:50.137 [2024-11-15 10:58:56.824817] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.137 [2024-11-15 10:58:56.824946] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.137 [2024-11-15 10:58:56.825072] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.137 [2024-11-15 10:58:56.825121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:50.137 10:58:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:50.397 /dev/nbd0 00:13:50.397 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:50.397 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:50.397 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:50.397 10:58:57 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:50.397 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:50.397 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:50.397 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:50.397 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:50.397 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:50.397 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:50.397 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.397 1+0 records in 00:13:50.397 1+0 records out 00:13:50.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054805 s, 7.5 MB/s 00:13:50.398 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.398 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:50.398 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.398 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:50.398 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:50.398 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.398 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:50.398 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:50.657 /dev/nbd1 00:13:50.657 10:58:57 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.657 1+0 records in 00:13:50.657 1+0 records out 00:13:50.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438413 s, 9.3 MB/s 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:50.657 10:58:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.917 10:58:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77717 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 77717 ']' 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 77717 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77717 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77717' 00:13:51.176 killing process with pid 77717 00:13:51.176 
10:58:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 77717 00:13:51.176 Received shutdown signal, test time was about 60.000000 seconds 00:13:51.176 00:13:51.176 Latency(us) 00:13:51.176 [2024-11-15T10:58:58.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.176 [2024-11-15T10:58:58.104Z] =================================================================================================================== 00:13:51.176 [2024-11-15T10:58:58.104Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:51.176 [2024-11-15 10:58:58.080867] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.176 10:58:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 77717 00:13:51.745 [2024-11-15 10:58:58.569812] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:53.127 ************************************ 00:13:53.127 END TEST raid_rebuild_test 00:13:53.127 ************************************ 00:13:53.127 00:13:53.127 real 0m17.753s 00:13:53.127 user 0m19.427s 00:13:53.127 sys 0m3.287s 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.127 10:58:59 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:53.127 10:58:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:53.127 10:58:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:53.127 10:58:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:53.127 ************************************ 00:13:53.127 START TEST raid_rebuild_test_sb 00:13:53.127 ************************************ 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78163 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78163 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78163 ']' 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:53.127 10:58:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.127 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:53.127 Zero copy mechanism will not be used. 00:13:53.127 [2024-11-15 10:58:59.837107] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:13:53.127 [2024-11-15 10:58:59.837252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78163 ] 00:13:53.127 [2024-11-15 10:59:00.012360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.401 [2024-11-15 10:59:00.130161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.677 [2024-11-15 10:59:00.341307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.678 [2024-11-15 10:59:00.341353] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.937 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:53.937 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:53.937 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.937 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:53.937 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:53.937 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.937 BaseBdev1_malloc 00:13:53.937 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.937 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:53.937 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.937 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.937 [2024-11-15 10:59:00.708569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:53.937 [2024-11-15 10:59:00.708638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.937 [2024-11-15 10:59:00.708661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:53.937 [2024-11-15 10:59:00.708673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.937 [2024-11-15 10:59:00.710754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.937 [2024-11-15 10:59:00.710874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:53.937 BaseBdev1 00:13:53.937 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.937 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.937 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:53.937 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.937 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.937 BaseBdev2_malloc 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.938 [2024-11-15 10:59:00.766819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:53.938 [2024-11-15 10:59:00.766903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.938 [2024-11-15 10:59:00.766925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:53.938 [2024-11-15 10:59:00.766940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.938 [2024-11-15 10:59:00.769249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.938 [2024-11-15 10:59:00.769350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:53.938 BaseBdev2 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.938 BaseBdev3_malloc 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.938 [2024-11-15 10:59:00.833943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:53.938 [2024-11-15 10:59:00.834007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.938 [2024-11-15 10:59:00.834030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:53.938 [2024-11-15 10:59:00.834042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.938 [2024-11-15 10:59:00.836147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.938 [2024-11-15 10:59:00.836192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:53.938 BaseBdev3 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.938 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.198 BaseBdev4_malloc 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:54.198 [2024-11-15 10:59:00.889490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:54.198 [2024-11-15 10:59:00.889594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.198 [2024-11-15 10:59:00.889617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:54.198 [2024-11-15 10:59:00.889628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.198 [2024-11-15 10:59:00.891994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.198 [2024-11-15 10:59:00.892040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:54.198 BaseBdev4 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.198 spare_malloc 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.198 spare_delay 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:54.198 10:59:00 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.198 [2024-11-15 10:59:00.958533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:54.198 [2024-11-15 10:59:00.958593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.198 [2024-11-15 10:59:00.958612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:54.198 [2024-11-15 10:59:00.958623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.198 [2024-11-15 10:59:00.960683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.198 [2024-11-15 10:59:00.960727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:54.198 spare 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.198 [2024-11-15 10:59:00.970596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.198 [2024-11-15 10:59:00.972602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.198 [2024-11-15 10:59:00.972682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.198 [2024-11-15 10:59:00.972742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:54.198 [2024-11-15 10:59:00.972949] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:54.198 [2024-11-15 10:59:00.972969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:54.198 [2024-11-15 10:59:00.973243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:54.198 [2024-11-15 10:59:00.973464] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:54.198 [2024-11-15 10:59:00.973477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:54.198 [2024-11-15 10:59:00.973668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.198 10:59:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.198 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.198 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.198 "name": "raid_bdev1", 00:13:54.198 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:13:54.198 "strip_size_kb": 0, 00:13:54.198 "state": "online", 00:13:54.198 "raid_level": "raid1", 00:13:54.198 "superblock": true, 00:13:54.198 "num_base_bdevs": 4, 00:13:54.198 "num_base_bdevs_discovered": 4, 00:13:54.198 "num_base_bdevs_operational": 4, 00:13:54.198 "base_bdevs_list": [ 00:13:54.198 { 00:13:54.198 "name": "BaseBdev1", 00:13:54.198 "uuid": "238ba464-a124-58f3-816a-1a95d17779fe", 00:13:54.198 "is_configured": true, 00:13:54.198 "data_offset": 2048, 00:13:54.198 "data_size": 63488 00:13:54.198 }, 00:13:54.198 { 00:13:54.198 "name": "BaseBdev2", 00:13:54.198 "uuid": "0ac2a018-9c02-5d34-8cbc-83a218add6d0", 00:13:54.198 "is_configured": true, 00:13:54.198 "data_offset": 2048, 00:13:54.198 "data_size": 63488 00:13:54.198 }, 00:13:54.198 { 00:13:54.198 "name": "BaseBdev3", 00:13:54.198 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:13:54.198 "is_configured": true, 00:13:54.198 "data_offset": 2048, 00:13:54.198 "data_size": 63488 00:13:54.198 }, 00:13:54.198 { 00:13:54.198 "name": "BaseBdev4", 00:13:54.198 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:13:54.198 "is_configured": true, 00:13:54.198 "data_offset": 2048, 00:13:54.198 "data_size": 63488 00:13:54.198 } 00:13:54.198 ] 00:13:54.198 }' 00:13:54.198 10:59:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.198 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.766 [2024-11-15 10:59:01.414170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:54.766 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.767 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:55.025 [2024-11-15 10:59:01.709403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:55.025 /dev/nbd0 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:55.025 
10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.025 1+0 records in 00:13:55.025 1+0 records out 00:13:55.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570699 s, 7.2 MB/s 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:55.025 10:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:01.643 63488+0 records in 00:14:01.643 63488+0 records out 00:14:01.643 32505856 bytes (33 MB, 31 MiB) copied, 5.57724 s, 5.8 MB/s 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:01.643 [2024-11-15 10:59:07.562755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.643 [2024-11-15 10:59:07.602799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:01.643 
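For reference, the block counts in the output above are internally consistent. A quick sanity check of my reading of the log (an editorial aside, not part of the test run): each base bdev comes from `bdev_malloc_create 32 512` (32 MiB in 512-byte blocks), the `-s` superblock run reports `data_offset` 2048, and the raid1 device reports `blockcnt 63488`, which is exactly what the full-device `dd` moved.

```shell
# Assumed from the log above: 32 MiB malloc base bdevs with 512-byte
# blocks, and a 2048-block data_offset reserved for the superblock.
base_blocks=$(( 32 * 1024 * 1024 / 512 ))   # 65536 blocks per base bdev
data_offset=2048                             # blocks reserved by -s
raid_blocks=$(( base_blocks - data_offset ))
echo "$raid_blocks"                          # 63488, the reported blockcnt
echo $(( raid_blocks * 512 ))                # 32505856 bytes (31 MiB), the dd total
```

This matches the `63488+0 records in` / `32505856 bytes (33 MB, 31 MiB) copied` lines from the earlier `dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488` write.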
10:59:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.643 "name": "raid_bdev1", 00:14:01.643 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:01.643 "strip_size_kb": 0, 00:14:01.643 "state": 
"online", 00:14:01.643 "raid_level": "raid1", 00:14:01.643 "superblock": true, 00:14:01.643 "num_base_bdevs": 4, 00:14:01.643 "num_base_bdevs_discovered": 3, 00:14:01.643 "num_base_bdevs_operational": 3, 00:14:01.643 "base_bdevs_list": [ 00:14:01.643 { 00:14:01.643 "name": null, 00:14:01.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.643 "is_configured": false, 00:14:01.643 "data_offset": 0, 00:14:01.643 "data_size": 63488 00:14:01.643 }, 00:14:01.643 { 00:14:01.643 "name": "BaseBdev2", 00:14:01.643 "uuid": "0ac2a018-9c02-5d34-8cbc-83a218add6d0", 00:14:01.643 "is_configured": true, 00:14:01.643 "data_offset": 2048, 00:14:01.643 "data_size": 63488 00:14:01.643 }, 00:14:01.643 { 00:14:01.643 "name": "BaseBdev3", 00:14:01.643 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:01.643 "is_configured": true, 00:14:01.643 "data_offset": 2048, 00:14:01.643 "data_size": 63488 00:14:01.643 }, 00:14:01.643 { 00:14:01.643 "name": "BaseBdev4", 00:14:01.643 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:01.643 "is_configured": true, 00:14:01.643 "data_offset": 2048, 00:14:01.643 "data_size": 63488 00:14:01.643 } 00:14:01.643 ] 00:14:01.643 }' 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.643 10:59:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.643 10:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:01.643 10:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.643 10:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.643 [2024-11-15 10:59:08.030070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.644 [2024-11-15 10:59:08.045862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:01.644 10:59:08 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.644 [2024-11-15 10:59:08.047813] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:01.644 10:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:02.213 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.213 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.213 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.213 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.213 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.213 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.213 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.213 10:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.213 10:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.213 10:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.213 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.213 "name": "raid_bdev1", 00:14:02.213 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:02.213 "strip_size_kb": 0, 00:14:02.213 "state": "online", 00:14:02.213 "raid_level": "raid1", 00:14:02.213 "superblock": true, 00:14:02.213 "num_base_bdevs": 4, 00:14:02.213 "num_base_bdevs_discovered": 4, 00:14:02.213 "num_base_bdevs_operational": 4, 00:14:02.213 "process": { 00:14:02.213 "type": "rebuild", 00:14:02.213 "target": "spare", 00:14:02.213 "progress": { 00:14:02.213 "blocks": 20480, 
00:14:02.213 "percent": 32 00:14:02.213 } 00:14:02.213 }, 00:14:02.213 "base_bdevs_list": [ 00:14:02.213 { 00:14:02.213 "name": "spare", 00:14:02.213 "uuid": "b8b220c3-90a5-5500-8237-df325bc7e247", 00:14:02.213 "is_configured": true, 00:14:02.213 "data_offset": 2048, 00:14:02.213 "data_size": 63488 00:14:02.213 }, 00:14:02.213 { 00:14:02.213 "name": "BaseBdev2", 00:14:02.213 "uuid": "0ac2a018-9c02-5d34-8cbc-83a218add6d0", 00:14:02.213 "is_configured": true, 00:14:02.213 "data_offset": 2048, 00:14:02.213 "data_size": 63488 00:14:02.213 }, 00:14:02.214 { 00:14:02.214 "name": "BaseBdev3", 00:14:02.214 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:02.214 "is_configured": true, 00:14:02.214 "data_offset": 2048, 00:14:02.214 "data_size": 63488 00:14:02.214 }, 00:14:02.214 { 00:14:02.214 "name": "BaseBdev4", 00:14:02.214 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:02.214 "is_configured": true, 00:14:02.214 "data_offset": 2048, 00:14:02.214 "data_size": 63488 00:14:02.214 } 00:14:02.214 ] 00:14:02.214 }' 00:14:02.214 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.473 [2024-11-15 10:59:09.215378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.473 [2024-11-15 10:59:09.253038] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:02.473 [2024-11-15 10:59:09.253132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.473 [2024-11-15 10:59:09.253151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.473 [2024-11-15 10:59:09.253163] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.473 "name": "raid_bdev1", 00:14:02.473 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:02.473 "strip_size_kb": 0, 00:14:02.473 "state": "online", 00:14:02.473 "raid_level": "raid1", 00:14:02.473 "superblock": true, 00:14:02.473 "num_base_bdevs": 4, 00:14:02.473 "num_base_bdevs_discovered": 3, 00:14:02.473 "num_base_bdevs_operational": 3, 00:14:02.473 "base_bdevs_list": [ 00:14:02.473 { 00:14:02.473 "name": null, 00:14:02.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.473 "is_configured": false, 00:14:02.473 "data_offset": 0, 00:14:02.473 "data_size": 63488 00:14:02.473 }, 00:14:02.473 { 00:14:02.473 "name": "BaseBdev2", 00:14:02.473 "uuid": "0ac2a018-9c02-5d34-8cbc-83a218add6d0", 00:14:02.473 "is_configured": true, 00:14:02.473 "data_offset": 2048, 00:14:02.473 "data_size": 63488 00:14:02.473 }, 00:14:02.473 { 00:14:02.473 "name": "BaseBdev3", 00:14:02.473 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:02.473 "is_configured": true, 00:14:02.473 "data_offset": 2048, 00:14:02.473 "data_size": 63488 00:14:02.473 }, 00:14:02.473 { 00:14:02.473 "name": "BaseBdev4", 00:14:02.473 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:02.473 "is_configured": true, 00:14:02.473 "data_offset": 2048, 00:14:02.473 "data_size": 63488 00:14:02.473 } 00:14:02.473 ] 00:14:02.473 }' 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.473 10:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:03.041 
10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.041 "name": "raid_bdev1", 00:14:03.041 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:03.041 "strip_size_kb": 0, 00:14:03.041 "state": "online", 00:14:03.041 "raid_level": "raid1", 00:14:03.041 "superblock": true, 00:14:03.041 "num_base_bdevs": 4, 00:14:03.041 "num_base_bdevs_discovered": 3, 00:14:03.041 "num_base_bdevs_operational": 3, 00:14:03.041 "base_bdevs_list": [ 00:14:03.041 { 00:14:03.041 "name": null, 00:14:03.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.041 "is_configured": false, 00:14:03.041 "data_offset": 0, 00:14:03.041 "data_size": 63488 00:14:03.041 }, 00:14:03.041 { 00:14:03.041 "name": "BaseBdev2", 00:14:03.041 "uuid": "0ac2a018-9c02-5d34-8cbc-83a218add6d0", 00:14:03.041 "is_configured": true, 00:14:03.041 "data_offset": 2048, 00:14:03.041 "data_size": 63488 00:14:03.041 }, 00:14:03.041 { 00:14:03.041 "name": "BaseBdev3", 00:14:03.041 "uuid": 
"3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:03.041 "is_configured": true, 00:14:03.041 "data_offset": 2048, 00:14:03.041 "data_size": 63488 00:14:03.041 }, 00:14:03.041 { 00:14:03.041 "name": "BaseBdev4", 00:14:03.041 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:03.041 "is_configured": true, 00:14:03.041 "data_offset": 2048, 00:14:03.041 "data_size": 63488 00:14:03.041 } 00:14:03.041 ] 00:14:03.041 }' 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.041 [2024-11-15 10:59:09.864234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.041 [2024-11-15 10:59:09.878832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.041 10:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:03.041 [2024-11-15 10:59:09.880797] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:03.976 10:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.976 10:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:03.976 10:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.976 10:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.976 10:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.976 10:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.976 10:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.976 10:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.976 10:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.236 10:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.236 10:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.236 "name": "raid_bdev1", 00:14:04.236 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:04.236 "strip_size_kb": 0, 00:14:04.236 "state": "online", 00:14:04.236 "raid_level": "raid1", 00:14:04.236 "superblock": true, 00:14:04.236 "num_base_bdevs": 4, 00:14:04.236 "num_base_bdevs_discovered": 4, 00:14:04.236 "num_base_bdevs_operational": 4, 00:14:04.236 "process": { 00:14:04.236 "type": "rebuild", 00:14:04.236 "target": "spare", 00:14:04.236 "progress": { 00:14:04.236 "blocks": 20480, 00:14:04.236 "percent": 32 00:14:04.236 } 00:14:04.236 }, 00:14:04.236 "base_bdevs_list": [ 00:14:04.236 { 00:14:04.236 "name": "spare", 00:14:04.236 "uuid": "b8b220c3-90a5-5500-8237-df325bc7e247", 00:14:04.236 "is_configured": true, 00:14:04.236 "data_offset": 2048, 00:14:04.236 "data_size": 63488 00:14:04.236 }, 00:14:04.236 { 00:14:04.236 "name": "BaseBdev2", 00:14:04.236 "uuid": "0ac2a018-9c02-5d34-8cbc-83a218add6d0", 00:14:04.236 "is_configured": true, 00:14:04.236 "data_offset": 2048, 
00:14:04.236 "data_size": 63488 00:14:04.236 }, 00:14:04.236 { 00:14:04.236 "name": "BaseBdev3", 00:14:04.236 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:04.236 "is_configured": true, 00:14:04.236 "data_offset": 2048, 00:14:04.236 "data_size": 63488 00:14:04.236 }, 00:14:04.236 { 00:14:04.236 "name": "BaseBdev4", 00:14:04.236 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:04.236 "is_configured": true, 00:14:04.236 "data_offset": 2048, 00:14:04.236 "data_size": 63488 00:14:04.236 } 00:14:04.236 ] 00:14:04.236 }' 00:14:04.236 10:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.236 10:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.236 10:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.236 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.236 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:04.236 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:04.236 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:04.236 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:04.236 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:04.236 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:04.236 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:04.236 10:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.236 10:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.236 [2024-11-15 10:59:11.048111] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:04.496 [2024-11-15 10:59:11.186149] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.496 "name": "raid_bdev1", 00:14:04.496 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:04.496 "strip_size_kb": 0, 00:14:04.496 "state": "online", 00:14:04.496 "raid_level": "raid1", 00:14:04.496 "superblock": true, 00:14:04.496 "num_base_bdevs": 4, 
00:14:04.496 "num_base_bdevs_discovered": 3, 00:14:04.496 "num_base_bdevs_operational": 3, 00:14:04.496 "process": { 00:14:04.496 "type": "rebuild", 00:14:04.496 "target": "spare", 00:14:04.496 "progress": { 00:14:04.496 "blocks": 24576, 00:14:04.496 "percent": 38 00:14:04.496 } 00:14:04.496 }, 00:14:04.496 "base_bdevs_list": [ 00:14:04.496 { 00:14:04.496 "name": "spare", 00:14:04.496 "uuid": "b8b220c3-90a5-5500-8237-df325bc7e247", 00:14:04.496 "is_configured": true, 00:14:04.496 "data_offset": 2048, 00:14:04.496 "data_size": 63488 00:14:04.496 }, 00:14:04.496 { 00:14:04.496 "name": null, 00:14:04.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.496 "is_configured": false, 00:14:04.496 "data_offset": 0, 00:14:04.496 "data_size": 63488 00:14:04.496 }, 00:14:04.496 { 00:14:04.496 "name": "BaseBdev3", 00:14:04.496 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:04.496 "is_configured": true, 00:14:04.496 "data_offset": 2048, 00:14:04.496 "data_size": 63488 00:14:04.496 }, 00:14:04.496 { 00:14:04.496 "name": "BaseBdev4", 00:14:04.496 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:04.496 "is_configured": true, 00:14:04.496 "data_offset": 2048, 00:14:04.496 "data_size": 63488 00:14:04.496 } 00:14:04.496 ] 00:14:04.496 }' 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=470 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.496 "name": "raid_bdev1", 00:14:04.496 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:04.496 "strip_size_kb": 0, 00:14:04.496 "state": "online", 00:14:04.496 "raid_level": "raid1", 00:14:04.496 "superblock": true, 00:14:04.496 "num_base_bdevs": 4, 00:14:04.496 "num_base_bdevs_discovered": 3, 00:14:04.496 "num_base_bdevs_operational": 3, 00:14:04.496 "process": { 00:14:04.496 "type": "rebuild", 00:14:04.496 "target": "spare", 00:14:04.496 "progress": { 00:14:04.496 "blocks": 26624, 00:14:04.496 "percent": 41 00:14:04.496 } 00:14:04.496 }, 00:14:04.496 "base_bdevs_list": [ 00:14:04.496 { 00:14:04.496 "name": "spare", 00:14:04.496 "uuid": "b8b220c3-90a5-5500-8237-df325bc7e247", 00:14:04.496 "is_configured": true, 00:14:04.496 "data_offset": 2048, 00:14:04.496 "data_size": 63488 00:14:04.496 }, 00:14:04.496 { 
00:14:04.496 "name": null, 00:14:04.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.496 "is_configured": false, 00:14:04.496 "data_offset": 0, 00:14:04.496 "data_size": 63488 00:14:04.496 }, 00:14:04.496 { 00:14:04.496 "name": "BaseBdev3", 00:14:04.496 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:04.496 "is_configured": true, 00:14:04.496 "data_offset": 2048, 00:14:04.496 "data_size": 63488 00:14:04.496 }, 00:14:04.496 { 00:14:04.496 "name": "BaseBdev4", 00:14:04.496 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:04.496 "is_configured": true, 00:14:04.496 "data_offset": 2048, 00:14:04.496 "data_size": 63488 00:14:04.496 } 00:14:04.496 ] 00:14:04.496 }' 00:14:04.496 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.754 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.754 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.754 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.754 10:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.692 10:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.692 10:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.692 10:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.692 10:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.692 10:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.692 10:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.692 10:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:05.692 10:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.692 10:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.692 10:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.692 10:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.692 10:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.692 "name": "raid_bdev1", 00:14:05.692 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:05.692 "strip_size_kb": 0, 00:14:05.692 "state": "online", 00:14:05.692 "raid_level": "raid1", 00:14:05.692 "superblock": true, 00:14:05.692 "num_base_bdevs": 4, 00:14:05.692 "num_base_bdevs_discovered": 3, 00:14:05.692 "num_base_bdevs_operational": 3, 00:14:05.692 "process": { 00:14:05.692 "type": "rebuild", 00:14:05.692 "target": "spare", 00:14:05.692 "progress": { 00:14:05.692 "blocks": 51200, 00:14:05.692 "percent": 80 00:14:05.692 } 00:14:05.692 }, 00:14:05.692 "base_bdevs_list": [ 00:14:05.692 { 00:14:05.692 "name": "spare", 00:14:05.692 "uuid": "b8b220c3-90a5-5500-8237-df325bc7e247", 00:14:05.692 "is_configured": true, 00:14:05.692 "data_offset": 2048, 00:14:05.692 "data_size": 63488 00:14:05.692 }, 00:14:05.692 { 00:14:05.692 "name": null, 00:14:05.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.692 "is_configured": false, 00:14:05.692 "data_offset": 0, 00:14:05.692 "data_size": 63488 00:14:05.692 }, 00:14:05.692 { 00:14:05.692 "name": "BaseBdev3", 00:14:05.692 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:05.692 "is_configured": true, 00:14:05.692 "data_offset": 2048, 00:14:05.692 "data_size": 63488 00:14:05.692 }, 00:14:05.692 { 00:14:05.692 "name": "BaseBdev4", 00:14:05.692 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:05.692 "is_configured": true, 00:14:05.692 "data_offset": 
2048, 00:14:05.692 "data_size": 63488 00:14:05.692 } 00:14:05.692 ] 00:14:05.692 }' 00:14:05.692 10:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.692 10:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.692 10:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.957 10:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.957 10:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:06.222 [2024-11-15 10:59:13.095114] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:06.222 [2024-11-15 10:59:13.095297] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:06.222 [2024-11-15 10:59:13.095482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.791 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.791 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.791 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.791 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.791 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.791 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.791 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.791 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.791 10:59:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.791 10:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.791 10:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.791 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.791 "name": "raid_bdev1", 00:14:06.791 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:06.791 "strip_size_kb": 0, 00:14:06.791 "state": "online", 00:14:06.791 "raid_level": "raid1", 00:14:06.791 "superblock": true, 00:14:06.791 "num_base_bdevs": 4, 00:14:06.791 "num_base_bdevs_discovered": 3, 00:14:06.791 "num_base_bdevs_operational": 3, 00:14:06.791 "base_bdevs_list": [ 00:14:06.791 { 00:14:06.791 "name": "spare", 00:14:06.791 "uuid": "b8b220c3-90a5-5500-8237-df325bc7e247", 00:14:06.791 "is_configured": true, 00:14:06.791 "data_offset": 2048, 00:14:06.791 "data_size": 63488 00:14:06.791 }, 00:14:06.791 { 00:14:06.791 "name": null, 00:14:06.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.791 "is_configured": false, 00:14:06.791 "data_offset": 0, 00:14:06.791 "data_size": 63488 00:14:06.791 }, 00:14:06.791 { 00:14:06.791 "name": "BaseBdev3", 00:14:06.791 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:06.791 "is_configured": true, 00:14:06.791 "data_offset": 2048, 00:14:06.791 "data_size": 63488 00:14:06.791 }, 00:14:06.791 { 00:14:06.791 "name": "BaseBdev4", 00:14:06.791 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:06.791 "is_configured": true, 00:14:06.791 "data_offset": 2048, 00:14:06.791 "data_size": 63488 00:14:06.791 } 00:14:06.791 ] 00:14:06.791 }' 00:14:06.791 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.049 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:07.049 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:07.049 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:07.049 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:07.049 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.049 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.049 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.049 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.049 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.049 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.049 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.049 10:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.049 10:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.049 10:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.049 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.049 "name": "raid_bdev1", 00:14:07.049 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:07.049 "strip_size_kb": 0, 00:14:07.049 "state": "online", 00:14:07.049 "raid_level": "raid1", 00:14:07.049 "superblock": true, 00:14:07.049 "num_base_bdevs": 4, 00:14:07.049 "num_base_bdevs_discovered": 3, 00:14:07.049 "num_base_bdevs_operational": 3, 00:14:07.049 "base_bdevs_list": [ 00:14:07.049 { 00:14:07.049 "name": "spare", 00:14:07.049 "uuid": "b8b220c3-90a5-5500-8237-df325bc7e247", 00:14:07.049 "is_configured": true, 00:14:07.049 "data_offset": 2048, 
00:14:07.049 "data_size": 63488 00:14:07.049 }, 00:14:07.049 { 00:14:07.049 "name": null, 00:14:07.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.049 "is_configured": false, 00:14:07.049 "data_offset": 0, 00:14:07.049 "data_size": 63488 00:14:07.049 }, 00:14:07.049 { 00:14:07.049 "name": "BaseBdev3", 00:14:07.049 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:07.049 "is_configured": true, 00:14:07.049 "data_offset": 2048, 00:14:07.049 "data_size": 63488 00:14:07.049 }, 00:14:07.049 { 00:14:07.049 "name": "BaseBdev4", 00:14:07.050 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:07.050 "is_configured": true, 00:14:07.050 "data_offset": 2048, 00:14:07.050 "data_size": 63488 00:14:07.050 } 00:14:07.050 ] 00:14:07.050 }' 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.050 
10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.050 10:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.308 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.309 "name": "raid_bdev1", 00:14:07.309 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:07.309 "strip_size_kb": 0, 00:14:07.309 "state": "online", 00:14:07.309 "raid_level": "raid1", 00:14:07.309 "superblock": true, 00:14:07.309 "num_base_bdevs": 4, 00:14:07.309 "num_base_bdevs_discovered": 3, 00:14:07.309 "num_base_bdevs_operational": 3, 00:14:07.309 "base_bdevs_list": [ 00:14:07.309 { 00:14:07.309 "name": "spare", 00:14:07.309 "uuid": "b8b220c3-90a5-5500-8237-df325bc7e247", 00:14:07.309 "is_configured": true, 00:14:07.309 "data_offset": 2048, 00:14:07.309 "data_size": 63488 00:14:07.309 }, 00:14:07.309 { 00:14:07.309 "name": null, 00:14:07.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.309 "is_configured": false, 00:14:07.309 "data_offset": 0, 00:14:07.309 "data_size": 63488 00:14:07.309 }, 00:14:07.309 { 00:14:07.309 "name": "BaseBdev3", 00:14:07.309 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:07.309 "is_configured": true, 00:14:07.309 "data_offset": 2048, 00:14:07.309 "data_size": 63488 
00:14:07.309 }, 00:14:07.309 { 00:14:07.309 "name": "BaseBdev4", 00:14:07.309 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:07.309 "is_configured": true, 00:14:07.309 "data_offset": 2048, 00:14:07.309 "data_size": 63488 00:14:07.309 } 00:14:07.309 ] 00:14:07.309 }' 00:14:07.309 10:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.309 10:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.568 [2024-11-15 10:59:14.383539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:07.568 [2024-11-15 10:59:14.383617] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.568 [2024-11-15 10:59:14.383727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.568 [2024-11-15 10:59:14.383822] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.568 [2024-11-15 10:59:14.383876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.568 
10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.568 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:07.826 /dev/nbd0 00:14:07.826 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:07.826 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:07.826 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:07.826 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 
00:14:07.826 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:07.826 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:07.826 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:07.826 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:07.826 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:07.827 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:07.827 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.827 1+0 records in 00:14:07.827 1+0 records out 00:14:07.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401202 s, 10.2 MB/s 00:14:07.827 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.827 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:07.827 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.827 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:07.827 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:07.827 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.827 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.827 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:08.084 /dev/nbd1 00:14:08.084 10:59:14 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:08.084 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:08.084 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:08.084 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:08.084 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:08.084 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:08.084 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:08.084 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:08.084 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:08.084 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:08.084 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.084 1+0 records in 00:14:08.085 1+0 records out 00:14:08.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253497 s, 16.2 MB/s 00:14:08.085 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.085 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:08.085 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.085 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:08.085 10:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:08.085 10:59:14 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.085 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:08.085 10:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:08.343 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:08.343 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.343 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:08.343 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:08.343 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:08.343 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.343 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:08.601 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:08.601 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:08.601 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:08.601 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.601 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.601 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:08.601 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:08.601 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.601 10:59:15 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.601 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.860 [2024-11-15 10:59:15.610103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:08.860 [2024-11-15 10:59:15.610161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.860 [2024-11-15 10:59:15.610186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:08.860 [2024-11-15 10:59:15.610196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.860 [2024-11-15 10:59:15.612581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.860 [2024-11-15 10:59:15.612620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:08.860 [2024-11-15 10:59:15.612730] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:08.860 [2024-11-15 10:59:15.612784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.860 [2024-11-15 10:59:15.612935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:08.860 [2024-11-15 10:59:15.613046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:08.860 spare 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.860 [2024-11-15 10:59:15.712959] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:08.860 [2024-11-15 10:59:15.713010] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:08.860 [2024-11-15 10:59:15.713428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:08.860 [2024-11-15 10:59:15.713656] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:08.860 [2024-11-15 10:59:15.713681] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:08.860 [2024-11-15 10:59:15.713942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.860 "name": "raid_bdev1", 00:14:08.860 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:08.860 "strip_size_kb": 0, 00:14:08.860 "state": "online", 00:14:08.860 "raid_level": "raid1", 00:14:08.860 "superblock": true, 00:14:08.860 "num_base_bdevs": 4, 00:14:08.860 "num_base_bdevs_discovered": 3, 00:14:08.860 "num_base_bdevs_operational": 3, 00:14:08.860 "base_bdevs_list": [ 00:14:08.860 { 00:14:08.860 "name": "spare", 00:14:08.860 "uuid": "b8b220c3-90a5-5500-8237-df325bc7e247", 00:14:08.860 "is_configured": true, 00:14:08.860 "data_offset": 2048, 00:14:08.860 "data_size": 63488 00:14:08.860 }, 00:14:08.860 { 00:14:08.860 "name": null, 00:14:08.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.860 "is_configured": false, 00:14:08.860 "data_offset": 2048, 00:14:08.860 "data_size": 63488 00:14:08.860 }, 00:14:08.860 { 00:14:08.860 "name": "BaseBdev3", 00:14:08.860 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:08.860 "is_configured": true, 00:14:08.860 "data_offset": 2048, 00:14:08.860 "data_size": 63488 00:14:08.860 }, 00:14:08.860 { 00:14:08.860 "name": "BaseBdev4", 00:14:08.860 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:08.860 "is_configured": true, 00:14:08.860 "data_offset": 2048, 00:14:08.860 "data_size": 63488 00:14:08.860 } 00:14:08.860 ] 00:14:08.860 }' 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.860 10:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.428 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:09.428 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.428 10:59:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:09.428 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:09.428 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.428 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.428 10:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.428 10:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.428 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.428 10:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.428 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.428 "name": "raid_bdev1", 00:14:09.428 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:09.428 "strip_size_kb": 0, 00:14:09.428 "state": "online", 00:14:09.429 "raid_level": "raid1", 00:14:09.429 "superblock": true, 00:14:09.429 "num_base_bdevs": 4, 00:14:09.429 "num_base_bdevs_discovered": 3, 00:14:09.429 "num_base_bdevs_operational": 3, 00:14:09.429 "base_bdevs_list": [ 00:14:09.429 { 00:14:09.429 "name": "spare", 00:14:09.429 "uuid": "b8b220c3-90a5-5500-8237-df325bc7e247", 00:14:09.429 "is_configured": true, 00:14:09.429 "data_offset": 2048, 00:14:09.429 "data_size": 63488 00:14:09.429 }, 00:14:09.429 { 00:14:09.429 "name": null, 00:14:09.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.429 "is_configured": false, 00:14:09.429 "data_offset": 2048, 00:14:09.429 "data_size": 63488 00:14:09.429 }, 00:14:09.429 { 00:14:09.429 "name": "BaseBdev3", 00:14:09.429 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:09.429 "is_configured": true, 00:14:09.429 "data_offset": 2048, 00:14:09.429 "data_size": 63488 00:14:09.429 
}, 00:14:09.429 { 00:14:09.429 "name": "BaseBdev4", 00:14:09.429 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:09.429 "is_configured": true, 00:14:09.429 "data_offset": 2048, 00:14:09.429 "data_size": 63488 00:14:09.429 } 00:14:09.429 ] 00:14:09.429 }' 00:14:09.429 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.429 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:09.429 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.429 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:09.429 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.429 10:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.429 10:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.429 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:09.429 10:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.689 [2024-11-15 10:59:16.360918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.689 "name": "raid_bdev1", 00:14:09.689 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:09.689 "strip_size_kb": 0, 00:14:09.689 "state": "online", 00:14:09.689 "raid_level": "raid1", 00:14:09.689 "superblock": true, 00:14:09.689 "num_base_bdevs": 4, 00:14:09.689 "num_base_bdevs_discovered": 2, 00:14:09.689 "num_base_bdevs_operational": 
2, 00:14:09.689 "base_bdevs_list": [ 00:14:09.689 { 00:14:09.689 "name": null, 00:14:09.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.689 "is_configured": false, 00:14:09.689 "data_offset": 0, 00:14:09.689 "data_size": 63488 00:14:09.689 }, 00:14:09.689 { 00:14:09.689 "name": null, 00:14:09.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.689 "is_configured": false, 00:14:09.689 "data_offset": 2048, 00:14:09.689 "data_size": 63488 00:14:09.689 }, 00:14:09.689 { 00:14:09.689 "name": "BaseBdev3", 00:14:09.689 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:09.689 "is_configured": true, 00:14:09.689 "data_offset": 2048, 00:14:09.689 "data_size": 63488 00:14:09.689 }, 00:14:09.689 { 00:14:09.689 "name": "BaseBdev4", 00:14:09.689 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:09.689 "is_configured": true, 00:14:09.689 "data_offset": 2048, 00:14:09.689 "data_size": 63488 00:14:09.689 } 00:14:09.689 ] 00:14:09.689 }' 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.689 10:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.949 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:09.949 10:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.949 10:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.949 [2024-11-15 10:59:16.816181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.949 [2024-11-15 10:59:16.816423] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:09.949 [2024-11-15 10:59:16.816455] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:09.949 [2024-11-15 10:59:16.816514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.949 [2024-11-15 10:59:16.831494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:09.949 10:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.949 10:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:09.949 [2024-11-15 10:59:16.833442] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.327 "name": "raid_bdev1", 00:14:11.327 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:11.327 "strip_size_kb": 0, 00:14:11.327 "state": "online", 00:14:11.327 "raid_level": "raid1", 
00:14:11.327 "superblock": true, 00:14:11.327 "num_base_bdevs": 4, 00:14:11.327 "num_base_bdevs_discovered": 3, 00:14:11.327 "num_base_bdevs_operational": 3, 00:14:11.327 "process": { 00:14:11.327 "type": "rebuild", 00:14:11.327 "target": "spare", 00:14:11.327 "progress": { 00:14:11.327 "blocks": 20480, 00:14:11.327 "percent": 32 00:14:11.327 } 00:14:11.327 }, 00:14:11.327 "base_bdevs_list": [ 00:14:11.327 { 00:14:11.327 "name": "spare", 00:14:11.327 "uuid": "b8b220c3-90a5-5500-8237-df325bc7e247", 00:14:11.327 "is_configured": true, 00:14:11.327 "data_offset": 2048, 00:14:11.327 "data_size": 63488 00:14:11.327 }, 00:14:11.327 { 00:14:11.327 "name": null, 00:14:11.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.327 "is_configured": false, 00:14:11.327 "data_offset": 2048, 00:14:11.327 "data_size": 63488 00:14:11.327 }, 00:14:11.327 { 00:14:11.327 "name": "BaseBdev3", 00:14:11.327 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:11.327 "is_configured": true, 00:14:11.327 "data_offset": 2048, 00:14:11.327 "data_size": 63488 00:14:11.327 }, 00:14:11.327 { 00:14:11.327 "name": "BaseBdev4", 00:14:11.327 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:11.327 "is_configured": true, 00:14:11.327 "data_offset": 2048, 00:14:11.327 "data_size": 63488 00:14:11.327 } 00:14:11.327 ] 00:14:11.327 }' 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:11.327 10:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.327 [2024-11-15 10:59:17.993146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.327 [2024-11-15 10:59:18.038808] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:11.327 [2024-11-15 10:59:18.038882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.327 [2024-11-15 10:59:18.038903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.327 [2024-11-15 10:59:18.038911] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.327 10:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.327 "name": "raid_bdev1", 00:14:11.327 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:11.327 "strip_size_kb": 0, 00:14:11.327 "state": "online", 00:14:11.327 "raid_level": "raid1", 00:14:11.327 "superblock": true, 00:14:11.327 "num_base_bdevs": 4, 00:14:11.327 "num_base_bdevs_discovered": 2, 00:14:11.327 "num_base_bdevs_operational": 2, 00:14:11.327 "base_bdevs_list": [ 00:14:11.327 { 00:14:11.327 "name": null, 00:14:11.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.327 "is_configured": false, 00:14:11.327 "data_offset": 0, 00:14:11.327 "data_size": 63488 00:14:11.327 }, 00:14:11.327 { 00:14:11.327 "name": null, 00:14:11.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.327 "is_configured": false, 00:14:11.327 "data_offset": 2048, 00:14:11.327 "data_size": 63488 00:14:11.327 }, 00:14:11.327 { 00:14:11.327 "name": "BaseBdev3", 00:14:11.327 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:11.327 "is_configured": true, 00:14:11.327 "data_offset": 2048, 00:14:11.327 "data_size": 63488 00:14:11.327 }, 00:14:11.328 { 00:14:11.328 "name": "BaseBdev4", 00:14:11.328 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:11.328 "is_configured": true, 00:14:11.328 "data_offset": 2048, 00:14:11.328 "data_size": 63488 00:14:11.328 } 00:14:11.328 ] 00:14:11.328 }' 00:14:11.328 10:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:11.328 10:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.588 10:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:11.588 10:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.588 10:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.588 [2024-11-15 10:59:18.499120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:11.588 [2024-11-15 10:59:18.499207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.588 [2024-11-15 10:59:18.499239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:11.588 [2024-11-15 10:59:18.499249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.588 [2024-11-15 10:59:18.499728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.588 [2024-11-15 10:59:18.499760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:11.588 [2024-11-15 10:59:18.499860] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:11.588 [2024-11-15 10:59:18.499874] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:11.588 [2024-11-15 10:59:18.499888] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:11.588 [2024-11-15 10:59:18.499922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.848 [2024-11-15 10:59:18.513864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:11.848 spare 00:14:11.848 10:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.848 10:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:11.848 [2024-11-15 10:59:18.515710] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.787 "name": "raid_bdev1", 00:14:12.787 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:12.787 "strip_size_kb": 0, 00:14:12.787 "state": "online", 00:14:12.787 
"raid_level": "raid1", 00:14:12.787 "superblock": true, 00:14:12.787 "num_base_bdevs": 4, 00:14:12.787 "num_base_bdevs_discovered": 3, 00:14:12.787 "num_base_bdevs_operational": 3, 00:14:12.787 "process": { 00:14:12.787 "type": "rebuild", 00:14:12.787 "target": "spare", 00:14:12.787 "progress": { 00:14:12.787 "blocks": 20480, 00:14:12.787 "percent": 32 00:14:12.787 } 00:14:12.787 }, 00:14:12.787 "base_bdevs_list": [ 00:14:12.787 { 00:14:12.787 "name": "spare", 00:14:12.787 "uuid": "b8b220c3-90a5-5500-8237-df325bc7e247", 00:14:12.787 "is_configured": true, 00:14:12.787 "data_offset": 2048, 00:14:12.787 "data_size": 63488 00:14:12.787 }, 00:14:12.787 { 00:14:12.787 "name": null, 00:14:12.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.787 "is_configured": false, 00:14:12.787 "data_offset": 2048, 00:14:12.787 "data_size": 63488 00:14:12.787 }, 00:14:12.787 { 00:14:12.787 "name": "BaseBdev3", 00:14:12.787 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:12.787 "is_configured": true, 00:14:12.787 "data_offset": 2048, 00:14:12.787 "data_size": 63488 00:14:12.787 }, 00:14:12.787 { 00:14:12.787 "name": "BaseBdev4", 00:14:12.787 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:12.787 "is_configured": true, 00:14:12.787 "data_offset": 2048, 00:14:12.787 "data_size": 63488 00:14:12.787 } 00:14:12.787 ] 00:14:12.787 }' 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.787 10:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.787 [2024-11-15 10:59:19.651832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:13.047 [2024-11-15 10:59:19.720749] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:13.047 [2024-11-15 10:59:19.720809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.047 [2024-11-15 10:59:19.720825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:13.047 [2024-11-15 10:59:19.720834] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:13.047 10:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.047 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:13.047 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.047 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.047 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.047 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.047 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:13.047 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.047 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.047 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.047 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.047 
10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.047 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.047 10:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.047 10:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.047 10:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.047 10:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.047 "name": "raid_bdev1", 00:14:13.047 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:13.047 "strip_size_kb": 0, 00:14:13.047 "state": "online", 00:14:13.047 "raid_level": "raid1", 00:14:13.047 "superblock": true, 00:14:13.047 "num_base_bdevs": 4, 00:14:13.047 "num_base_bdevs_discovered": 2, 00:14:13.048 "num_base_bdevs_operational": 2, 00:14:13.048 "base_bdevs_list": [ 00:14:13.048 { 00:14:13.048 "name": null, 00:14:13.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.048 "is_configured": false, 00:14:13.048 "data_offset": 0, 00:14:13.048 "data_size": 63488 00:14:13.048 }, 00:14:13.048 { 00:14:13.048 "name": null, 00:14:13.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.048 "is_configured": false, 00:14:13.048 "data_offset": 2048, 00:14:13.048 "data_size": 63488 00:14:13.048 }, 00:14:13.048 { 00:14:13.048 "name": "BaseBdev3", 00:14:13.048 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:13.048 "is_configured": true, 00:14:13.048 "data_offset": 2048, 00:14:13.048 "data_size": 63488 00:14:13.048 }, 00:14:13.048 { 00:14:13.048 "name": "BaseBdev4", 00:14:13.048 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:13.048 "is_configured": true, 00:14:13.048 "data_offset": 2048, 00:14:13.048 "data_size": 63488 00:14:13.048 } 00:14:13.048 ] 00:14:13.048 }' 00:14:13.048 10:59:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.048 10:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.308 10:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:13.308 10:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.308 10:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:13.308 10:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:13.308 10:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.308 10:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.308 10:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.308 10:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.308 10:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.308 10:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.308 10:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.308 "name": "raid_bdev1", 00:14:13.308 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:13.308 "strip_size_kb": 0, 00:14:13.308 "state": "online", 00:14:13.308 "raid_level": "raid1", 00:14:13.308 "superblock": true, 00:14:13.308 "num_base_bdevs": 4, 00:14:13.308 "num_base_bdevs_discovered": 2, 00:14:13.308 "num_base_bdevs_operational": 2, 00:14:13.308 "base_bdevs_list": [ 00:14:13.308 { 00:14:13.308 "name": null, 00:14:13.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.308 "is_configured": false, 00:14:13.308 "data_offset": 0, 00:14:13.308 "data_size": 63488 00:14:13.308 }, 00:14:13.308 
{ 00:14:13.308 "name": null, 00:14:13.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.308 "is_configured": false, 00:14:13.308 "data_offset": 2048, 00:14:13.308 "data_size": 63488 00:14:13.308 }, 00:14:13.308 { 00:14:13.308 "name": "BaseBdev3", 00:14:13.308 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:13.308 "is_configured": true, 00:14:13.308 "data_offset": 2048, 00:14:13.308 "data_size": 63488 00:14:13.308 }, 00:14:13.308 { 00:14:13.308 "name": "BaseBdev4", 00:14:13.308 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:13.308 "is_configured": true, 00:14:13.308 "data_offset": 2048, 00:14:13.308 "data_size": 63488 00:14:13.308 } 00:14:13.308 ] 00:14:13.308 }' 00:14:13.308 10:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.308 10:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:13.308 10:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.567 10:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.567 10:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:13.567 10:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.567 10:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.567 10:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.567 10:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:13.567 10:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.567 10:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.567 [2024-11-15 10:59:20.261782] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:13.567 [2024-11-15 10:59:20.261849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.567 [2024-11-15 10:59:20.261870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:13.567 [2024-11-15 10:59:20.261882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.567 [2024-11-15 10:59:20.262343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.567 [2024-11-15 10:59:20.262371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:13.567 [2024-11-15 10:59:20.262457] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:13.567 [2024-11-15 10:59:20.262474] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:13.567 [2024-11-15 10:59:20.262482] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:13.567 [2024-11-15 10:59:20.262505] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:13.567 BaseBdev1 00:14:13.567 10:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.567 10:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.553 10:59:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.553 "name": "raid_bdev1", 00:14:14.553 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:14.553 "strip_size_kb": 0, 00:14:14.553 "state": "online", 00:14:14.553 "raid_level": "raid1", 00:14:14.553 "superblock": true, 00:14:14.553 "num_base_bdevs": 4, 00:14:14.553 "num_base_bdevs_discovered": 2, 00:14:14.553 "num_base_bdevs_operational": 2, 00:14:14.553 "base_bdevs_list": [ 00:14:14.553 { 00:14:14.553 "name": null, 00:14:14.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.553 "is_configured": false, 00:14:14.553 "data_offset": 0, 00:14:14.553 "data_size": 63488 00:14:14.553 }, 00:14:14.553 { 00:14:14.553 "name": null, 00:14:14.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.553 
"is_configured": false, 00:14:14.553 "data_offset": 2048, 00:14:14.553 "data_size": 63488 00:14:14.553 }, 00:14:14.553 { 00:14:14.553 "name": "BaseBdev3", 00:14:14.553 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:14.553 "is_configured": true, 00:14:14.553 "data_offset": 2048, 00:14:14.553 "data_size": 63488 00:14:14.553 }, 00:14:14.553 { 00:14:14.553 "name": "BaseBdev4", 00:14:14.553 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:14.553 "is_configured": true, 00:14:14.553 "data_offset": 2048, 00:14:14.553 "data_size": 63488 00:14:14.553 } 00:14:14.553 ] 00:14:14.553 }' 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.553 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.811 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:14.811 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.811 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:14.811 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:14.811 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.811 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.811 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.811 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.812 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.812 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.812 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:14.812 "name": "raid_bdev1", 00:14:14.812 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:14.812 "strip_size_kb": 0, 00:14:14.812 "state": "online", 00:14:14.812 "raid_level": "raid1", 00:14:14.812 "superblock": true, 00:14:14.812 "num_base_bdevs": 4, 00:14:14.812 "num_base_bdevs_discovered": 2, 00:14:14.812 "num_base_bdevs_operational": 2, 00:14:14.812 "base_bdevs_list": [ 00:14:14.812 { 00:14:14.812 "name": null, 00:14:14.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.812 "is_configured": false, 00:14:14.812 "data_offset": 0, 00:14:14.812 "data_size": 63488 00:14:14.812 }, 00:14:14.812 { 00:14:14.812 "name": null, 00:14:14.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.812 "is_configured": false, 00:14:14.812 "data_offset": 2048, 00:14:14.812 "data_size": 63488 00:14:14.812 }, 00:14:14.812 { 00:14:14.812 "name": "BaseBdev3", 00:14:14.812 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:14.812 "is_configured": true, 00:14:14.812 "data_offset": 2048, 00:14:14.812 "data_size": 63488 00:14:14.812 }, 00:14:14.812 { 00:14:14.812 "name": "BaseBdev4", 00:14:14.812 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:14.812 "is_configured": true, 00:14:14.812 "data_offset": 2048, 00:14:14.812 "data_size": 63488 00:14:14.812 } 00:14:14.812 ] 00:14:14.812 }' 00:14:14.812 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.071 [2024-11-15 10:59:21.799204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.071 [2024-11-15 10:59:21.799458] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:15.071 [2024-11-15 10:59:21.799477] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:15.071 request: 00:14:15.071 { 00:14:15.071 "base_bdev": "BaseBdev1", 00:14:15.071 "raid_bdev": "raid_bdev1", 00:14:15.071 "method": "bdev_raid_add_base_bdev", 00:14:15.071 "req_id": 1 00:14:15.071 } 00:14:15.071 Got JSON-RPC error response 00:14:15.071 response: 00:14:15.071 { 00:14:15.071 "code": -22, 00:14:15.071 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:15.071 } 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:15.071 10:59:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.006 "name": "raid_bdev1", 00:14:16.006 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:16.006 "strip_size_kb": 0, 00:14:16.006 "state": "online", 00:14:16.006 "raid_level": "raid1", 00:14:16.006 "superblock": true, 00:14:16.006 "num_base_bdevs": 4, 00:14:16.006 "num_base_bdevs_discovered": 2, 00:14:16.006 "num_base_bdevs_operational": 2, 00:14:16.006 "base_bdevs_list": [ 00:14:16.006 { 00:14:16.006 "name": null, 00:14:16.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.006 "is_configured": false, 00:14:16.006 "data_offset": 0, 00:14:16.006 "data_size": 63488 00:14:16.006 }, 00:14:16.006 { 00:14:16.006 "name": null, 00:14:16.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.006 "is_configured": false, 00:14:16.006 "data_offset": 2048, 00:14:16.006 "data_size": 63488 00:14:16.006 }, 00:14:16.006 { 00:14:16.006 "name": "BaseBdev3", 00:14:16.006 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:16.006 "is_configured": true, 00:14:16.006 "data_offset": 2048, 00:14:16.006 "data_size": 63488 00:14:16.006 }, 00:14:16.006 { 00:14:16.006 "name": "BaseBdev4", 00:14:16.006 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:16.006 "is_configured": true, 00:14:16.006 "data_offset": 2048, 00:14:16.006 "data_size": 63488 00:14:16.006 } 00:14:16.006 ] 00:14:16.006 }' 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.006 10:59:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.576 10:59:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.576 "name": "raid_bdev1", 00:14:16.576 "uuid": "bafaee23-57ed-446f-9e37-f623f7aa1c84", 00:14:16.576 "strip_size_kb": 0, 00:14:16.576 "state": "online", 00:14:16.576 "raid_level": "raid1", 00:14:16.576 "superblock": true, 00:14:16.576 "num_base_bdevs": 4, 00:14:16.576 "num_base_bdevs_discovered": 2, 00:14:16.576 "num_base_bdevs_operational": 2, 00:14:16.576 "base_bdevs_list": [ 00:14:16.576 { 00:14:16.576 "name": null, 00:14:16.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.576 "is_configured": false, 00:14:16.576 "data_offset": 0, 00:14:16.576 "data_size": 63488 00:14:16.576 }, 00:14:16.576 { 00:14:16.576 "name": null, 00:14:16.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.576 "is_configured": false, 00:14:16.576 "data_offset": 2048, 00:14:16.576 "data_size": 63488 00:14:16.576 }, 00:14:16.576 { 00:14:16.576 "name": "BaseBdev3", 00:14:16.576 "uuid": "3cfdeddb-a708-5615-85e0-d0b499bd37a6", 00:14:16.576 "is_configured": true, 00:14:16.576 "data_offset": 2048, 00:14:16.576 "data_size": 63488 00:14:16.576 }, 
00:14:16.576 { 00:14:16.576 "name": "BaseBdev4", 00:14:16.576 "uuid": "902fdf0b-bce2-5dd8-a172-c320811bc981", 00:14:16.576 "is_configured": true, 00:14:16.576 "data_offset": 2048, 00:14:16.576 "data_size": 63488 00:14:16.576 } 00:14:16.576 ] 00:14:16.576 }' 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78163 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78163 ']' 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 78163 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78163 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:16.576 killing process with pid 78163 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78163' 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 78163 00:14:16.576 Received shutdown signal, test time was about 60.000000 seconds 00:14:16.576 00:14:16.576 Latency(us) 00:14:16.576 
[2024-11-15T10:59:23.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.576 [2024-11-15T10:59:23.504Z] =================================================================================================================== 00:14:16.576 [2024-11-15T10:59:23.504Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:16.576 [2024-11-15 10:59:23.407476] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:16.576 10:59:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 78163 00:14:16.576 [2024-11-15 10:59:23.407603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.576 [2024-11-15 10:59:23.407676] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.576 [2024-11-15 10:59:23.407687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:17.146 [2024-11-15 10:59:23.919521] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:18.525 00:14:18.525 real 0m25.328s 00:14:18.525 user 0m30.227s 00:14:18.525 sys 0m3.849s 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.525 ************************************ 00:14:18.525 END TEST raid_rebuild_test_sb 00:14:18.525 ************************************ 00:14:18.525 10:59:25 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:18.525 10:59:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:18.525 10:59:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:18.525 10:59:25 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:14:18.525 ************************************ 00:14:18.525 START TEST raid_rebuild_test_io 00:14:18.525 ************************************ 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:18.525 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:18.526 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:18.526 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:18.526 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:18.526 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:18.526 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:18.526 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:18.526 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78922 00:14:18.526 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78922 00:14:18.526 10:59:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 78922 ']' 00:14:18.526 10:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:18.526 10:59:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.526 10:59:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:14:18.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.526 10:59:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.526 10:59:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:18.526 10:59:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.526 [2024-11-15 10:59:25.234103] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:14:18.526 [2024-11-15 10:59:25.234247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78922 ] 00:14:18.526 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:18.526 Zero copy mechanism will not be used. 
00:14:18.526 [2024-11-15 10:59:25.388922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.784 [2024-11-15 10:59:25.509745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.044 [2024-11-15 10:59:25.727784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.044 [2024-11-15 10:59:25.727828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.302 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.303 BaseBdev1_malloc 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.303 [2024-11-15 10:59:26.138220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:19.303 [2024-11-15 10:59:26.138291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.303 [2024-11-15 10:59:26.138326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:19.303 [2024-11-15 
10:59:26.138338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.303 [2024-11-15 10:59:26.140606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.303 [2024-11-15 10:59:26.140648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:19.303 BaseBdev1 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.303 BaseBdev2_malloc 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.303 [2024-11-15 10:59:26.193479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:19.303 [2024-11-15 10:59:26.193541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.303 [2024-11-15 10:59:26.193578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:19.303 [2024-11-15 10:59:26.193589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.303 [2024-11-15 10:59:26.195735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:19.303 [2024-11-15 10:59:26.195770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:19.303 BaseBdev2 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.303 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.563 BaseBdev3_malloc 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.563 [2024-11-15 10:59:26.263372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:19.563 [2024-11-15 10:59:26.263433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.563 [2024-11-15 10:59:26.263470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:19.563 [2024-11-15 10:59:26.263483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.563 [2024-11-15 10:59:26.265720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.563 [2024-11-15 10:59:26.265763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:19.563 BaseBdev3 00:14:19.563 10:59:26 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.563 BaseBdev4_malloc 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.563 [2024-11-15 10:59:26.319945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:19.563 [2024-11-15 10:59:26.320020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.563 [2024-11-15 10:59:26.320038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:19.563 [2024-11-15 10:59:26.320049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.563 [2024-11-15 10:59:26.322246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.563 [2024-11-15 10:59:26.322283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:19.563 BaseBdev4 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.563 spare_malloc 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.563 spare_delay 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.563 [2024-11-15 10:59:26.387777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:19.563 [2024-11-15 10:59:26.387835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.563 [2024-11-15 10:59:26.387855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:19.563 [2024-11-15 10:59:26.387865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.563 [2024-11-15 10:59:26.389973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.563 [2024-11-15 10:59:26.390009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:19.563 spare 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.563 [2024-11-15 10:59:26.399807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.563 [2024-11-15 10:59:26.401641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.563 [2024-11-15 10:59:26.401712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:19.563 [2024-11-15 10:59:26.401765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:19.563 [2024-11-15 10:59:26.401845] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:19.563 [2024-11-15 10:59:26.401869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:19.563 [2024-11-15 10:59:26.402114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:19.563 [2024-11-15 10:59:26.402297] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:19.563 [2024-11-15 10:59:26.402327] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:19.563 [2024-11-15 10:59:26.402517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:19.563 10:59:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.563 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.564 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.564 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.564 "name": "raid_bdev1", 00:14:19.564 "uuid": "907615a6-457e-4317-b961-229fdf5fa7e6", 00:14:19.564 "strip_size_kb": 0, 00:14:19.564 "state": "online", 00:14:19.564 "raid_level": "raid1", 00:14:19.564 "superblock": false, 00:14:19.564 "num_base_bdevs": 4, 00:14:19.564 "num_base_bdevs_discovered": 4, 00:14:19.564 "num_base_bdevs_operational": 4, 00:14:19.564 "base_bdevs_list": [ 00:14:19.564 
{ 00:14:19.564 "name": "BaseBdev1", 00:14:19.564 "uuid": "dc4ce4e0-959e-583f-9182-f838c4b9c585", 00:14:19.564 "is_configured": true, 00:14:19.564 "data_offset": 0, 00:14:19.564 "data_size": 65536 00:14:19.564 }, 00:14:19.564 { 00:14:19.564 "name": "BaseBdev2", 00:14:19.564 "uuid": "8501292e-8298-5632-af79-d2efdb28f9fc", 00:14:19.564 "is_configured": true, 00:14:19.564 "data_offset": 0, 00:14:19.564 "data_size": 65536 00:14:19.564 }, 00:14:19.564 { 00:14:19.564 "name": "BaseBdev3", 00:14:19.564 "uuid": "93b3cb1f-f602-5486-8336-a64758631ab5", 00:14:19.564 "is_configured": true, 00:14:19.564 "data_offset": 0, 00:14:19.564 "data_size": 65536 00:14:19.564 }, 00:14:19.564 { 00:14:19.564 "name": "BaseBdev4", 00:14:19.564 "uuid": "95409ab5-a969-5812-b6dd-1961b2b680a1", 00:14:19.564 "is_configured": true, 00:14:19.564 "data_offset": 0, 00:14:19.564 "data_size": 65536 00:14:19.564 } 00:14:19.564 ] 00:14:19.564 }' 00:14:19.564 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.564 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.131 [2024-11-15 10:59:26.903316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.131 
10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.131 [2024-11-15 10:59:26.978827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.131 10:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.131 10:59:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.131 10:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.131 "name": "raid_bdev1", 00:14:20.131 "uuid": "907615a6-457e-4317-b961-229fdf5fa7e6", 00:14:20.131 "strip_size_kb": 0, 00:14:20.131 "state": "online", 00:14:20.131 "raid_level": "raid1", 00:14:20.131 "superblock": false, 00:14:20.131 "num_base_bdevs": 4, 00:14:20.131 "num_base_bdevs_discovered": 3, 00:14:20.131 "num_base_bdevs_operational": 3, 00:14:20.131 "base_bdevs_list": [ 00:14:20.131 { 00:14:20.131 "name": null, 00:14:20.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.131 "is_configured": false, 00:14:20.131 "data_offset": 0, 00:14:20.131 "data_size": 65536 00:14:20.131 }, 00:14:20.131 { 00:14:20.131 "name": "BaseBdev2", 00:14:20.131 "uuid": "8501292e-8298-5632-af79-d2efdb28f9fc", 00:14:20.131 "is_configured": true, 00:14:20.131 "data_offset": 0, 00:14:20.131 "data_size": 65536 00:14:20.131 }, 00:14:20.131 { 00:14:20.131 "name": "BaseBdev3", 00:14:20.131 "uuid": 
"93b3cb1f-f602-5486-8336-a64758631ab5", 00:14:20.131 "is_configured": true, 00:14:20.131 "data_offset": 0, 00:14:20.131 "data_size": 65536 00:14:20.131 }, 00:14:20.131 { 00:14:20.131 "name": "BaseBdev4", 00:14:20.131 "uuid": "95409ab5-a969-5812-b6dd-1961b2b680a1", 00:14:20.131 "is_configured": true, 00:14:20.132 "data_offset": 0, 00:14:20.132 "data_size": 65536 00:14:20.132 } 00:14:20.132 ] 00:14:20.132 }' 00:14:20.132 10:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.132 10:59:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.391 [2024-11-15 10:59:27.079353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:20.391 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:20.391 Zero copy mechanism will not be used. 00:14:20.391 Running I/O for 60 seconds... 00:14:20.650 10:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:20.650 10:59:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.650 10:59:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.650 [2024-11-15 10:59:27.473613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:20.650 10:59:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.650 10:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:20.650 [2024-11-15 10:59:27.543159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:20.650 [2024-11-15 10:59:27.545195] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:20.909 [2024-11-15 10:59:27.661597] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:20.909 
[2024-11-15 10:59:27.662162] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:21.169 [2024-11-15 10:59:27.885459] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:21.169 [2024-11-15 10:59:27.885793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:21.427 131.00 IOPS, 393.00 MiB/s [2024-11-15T10:59:28.355Z] [2024-11-15 10:59:28.228120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:21.687 [2024-11-15 10:59:28.445820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:21.687 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.687 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.687 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.687 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.687 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.687 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.687 10:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.687 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.687 10:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.687 10:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.687 10:59:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.687 "name": "raid_bdev1", 00:14:21.687 "uuid": "907615a6-457e-4317-b961-229fdf5fa7e6", 00:14:21.687 "strip_size_kb": 0, 00:14:21.687 "state": "online", 00:14:21.687 "raid_level": "raid1", 00:14:21.687 "superblock": false, 00:14:21.687 "num_base_bdevs": 4, 00:14:21.687 "num_base_bdevs_discovered": 4, 00:14:21.687 "num_base_bdevs_operational": 4, 00:14:21.687 "process": { 00:14:21.687 "type": "rebuild", 00:14:21.687 "target": "spare", 00:14:21.687 "progress": { 00:14:21.687 "blocks": 10240, 00:14:21.687 "percent": 15 00:14:21.687 } 00:14:21.687 }, 00:14:21.687 "base_bdevs_list": [ 00:14:21.687 { 00:14:21.687 "name": "spare", 00:14:21.687 "uuid": "2ecde64e-b982-5efa-9ca7-057d67e1ce29", 00:14:21.687 "is_configured": true, 00:14:21.687 "data_offset": 0, 00:14:21.687 "data_size": 65536 00:14:21.687 }, 00:14:21.687 { 00:14:21.687 "name": "BaseBdev2", 00:14:21.687 "uuid": "8501292e-8298-5632-af79-d2efdb28f9fc", 00:14:21.687 "is_configured": true, 00:14:21.687 "data_offset": 0, 00:14:21.687 "data_size": 65536 00:14:21.687 }, 00:14:21.687 { 00:14:21.687 "name": "BaseBdev3", 00:14:21.687 "uuid": "93b3cb1f-f602-5486-8336-a64758631ab5", 00:14:21.687 "is_configured": true, 00:14:21.687 "data_offset": 0, 00:14:21.687 "data_size": 65536 00:14:21.687 }, 00:14:21.687 { 00:14:21.687 "name": "BaseBdev4", 00:14:21.687 "uuid": "95409ab5-a969-5812-b6dd-1961b2b680a1", 00:14:21.687 "is_configured": true, 00:14:21.687 "data_offset": 0, 00:14:21.687 "data_size": 65536 00:14:21.687 } 00:14:21.687 ] 00:14:21.687 }' 00:14:21.687 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.946 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.946 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.946 10:59:28 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.946 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:21.946 10:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.946 10:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.946 [2024-11-15 10:59:28.672525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:22.205 [2024-11-15 10:59:28.894509] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:22.205 [2024-11-15 10:59:28.898406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.205 [2024-11-15 10:59:28.898457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:22.205 [2024-11-15 10:59:28.898472] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:22.205 [2024-11-15 10:59:28.941295] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:22.205 10:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.205 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:22.205 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.205 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.205 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.205 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.205 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.205 10:59:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.205 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.205 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.205 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.205 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.205 10:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.205 10:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.205 10:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.205 10:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.205 10:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.205 "name": "raid_bdev1", 00:14:22.205 "uuid": "907615a6-457e-4317-b961-229fdf5fa7e6", 00:14:22.205 "strip_size_kb": 0, 00:14:22.205 "state": "online", 00:14:22.205 "raid_level": "raid1", 00:14:22.205 "superblock": false, 00:14:22.205 "num_base_bdevs": 4, 00:14:22.205 "num_base_bdevs_discovered": 3, 00:14:22.205 "num_base_bdevs_operational": 3, 00:14:22.205 "base_bdevs_list": [ 00:14:22.205 { 00:14:22.205 "name": null, 00:14:22.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.205 "is_configured": false, 00:14:22.205 "data_offset": 0, 00:14:22.205 "data_size": 65536 00:14:22.205 }, 00:14:22.205 { 00:14:22.205 "name": "BaseBdev2", 00:14:22.205 "uuid": "8501292e-8298-5632-af79-d2efdb28f9fc", 00:14:22.205 "is_configured": true, 00:14:22.205 "data_offset": 0, 00:14:22.205 "data_size": 65536 00:14:22.205 }, 00:14:22.205 { 00:14:22.205 "name": "BaseBdev3", 00:14:22.205 "uuid": "93b3cb1f-f602-5486-8336-a64758631ab5", 00:14:22.205 "is_configured": true, 
00:14:22.205 "data_offset": 0, 00:14:22.205 "data_size": 65536 00:14:22.205 }, 00:14:22.205 { 00:14:22.205 "name": "BaseBdev4", 00:14:22.205 "uuid": "95409ab5-a969-5812-b6dd-1961b2b680a1", 00:14:22.205 "is_configured": true, 00:14:22.205 "data_offset": 0, 00:14:22.205 "data_size": 65536 00:14:22.205 } 00:14:22.205 ] 00:14:22.205 }' 00:14:22.205 10:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.205 10:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.771 102.00 IOPS, 306.00 MiB/s [2024-11-15T10:59:29.699Z] 10:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.771 "name": "raid_bdev1", 00:14:22.771 "uuid": "907615a6-457e-4317-b961-229fdf5fa7e6", 00:14:22.771 "strip_size_kb": 0, 00:14:22.771 "state": "online", 00:14:22.771 "raid_level": "raid1", 00:14:22.771 
"superblock": false, 00:14:22.771 "num_base_bdevs": 4, 00:14:22.771 "num_base_bdevs_discovered": 3, 00:14:22.771 "num_base_bdevs_operational": 3, 00:14:22.771 "base_bdevs_list": [ 00:14:22.771 { 00:14:22.771 "name": null, 00:14:22.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.771 "is_configured": false, 00:14:22.771 "data_offset": 0, 00:14:22.771 "data_size": 65536 00:14:22.771 }, 00:14:22.771 { 00:14:22.771 "name": "BaseBdev2", 00:14:22.771 "uuid": "8501292e-8298-5632-af79-d2efdb28f9fc", 00:14:22.771 "is_configured": true, 00:14:22.771 "data_offset": 0, 00:14:22.771 "data_size": 65536 00:14:22.771 }, 00:14:22.771 { 00:14:22.771 "name": "BaseBdev3", 00:14:22.771 "uuid": "93b3cb1f-f602-5486-8336-a64758631ab5", 00:14:22.771 "is_configured": true, 00:14:22.771 "data_offset": 0, 00:14:22.771 "data_size": 65536 00:14:22.771 }, 00:14:22.771 { 00:14:22.771 "name": "BaseBdev4", 00:14:22.771 "uuid": "95409ab5-a969-5812-b6dd-1961b2b680a1", 00:14:22.771 "is_configured": true, 00:14:22.771 "data_offset": 0, 00:14:22.771 "data_size": 65536 00:14:22.771 } 00:14:22.771 ] 00:14:22.771 }' 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.771 [2024-11-15 10:59:29.548715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.771 10:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:22.771 [2024-11-15 10:59:29.617513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:22.771 [2024-11-15 10:59:29.619484] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:23.029 [2024-11-15 10:59:29.726524] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:23.029 [2024-11-15 10:59:29.727979] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:23.287 [2024-11-15 10:59:29.991973] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:23.854 133.33 IOPS, 400.00 MiB/s [2024-11-15T10:59:30.782Z] [2024-11-15 10:59:30.482702] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:23.854 [2024-11-15 10:59:30.483443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.854 "name": "raid_bdev1", 00:14:23.854 "uuid": "907615a6-457e-4317-b961-229fdf5fa7e6", 00:14:23.854 "strip_size_kb": 0, 00:14:23.854 "state": "online", 00:14:23.854 "raid_level": "raid1", 00:14:23.854 "superblock": false, 00:14:23.854 "num_base_bdevs": 4, 00:14:23.854 "num_base_bdevs_discovered": 4, 00:14:23.854 "num_base_bdevs_operational": 4, 00:14:23.854 "process": { 00:14:23.854 "type": "rebuild", 00:14:23.854 "target": "spare", 00:14:23.854 "progress": { 00:14:23.854 "blocks": 10240, 00:14:23.854 "percent": 15 00:14:23.854 } 00:14:23.854 }, 00:14:23.854 "base_bdevs_list": [ 00:14:23.854 { 00:14:23.854 "name": "spare", 00:14:23.854 "uuid": "2ecde64e-b982-5efa-9ca7-057d67e1ce29", 00:14:23.854 "is_configured": true, 00:14:23.854 "data_offset": 0, 00:14:23.854 "data_size": 65536 00:14:23.854 }, 00:14:23.854 { 00:14:23.854 "name": "BaseBdev2", 00:14:23.854 "uuid": "8501292e-8298-5632-af79-d2efdb28f9fc", 00:14:23.854 "is_configured": true, 00:14:23.854 "data_offset": 0, 00:14:23.854 "data_size": 65536 00:14:23.854 }, 00:14:23.854 { 00:14:23.854 "name": "BaseBdev3", 00:14:23.854 "uuid": "93b3cb1f-f602-5486-8336-a64758631ab5", 00:14:23.854 "is_configured": true, 00:14:23.854 "data_offset": 0, 00:14:23.854 "data_size": 65536 00:14:23.854 }, 00:14:23.854 { 00:14:23.854 "name": "BaseBdev4", 00:14:23.854 "uuid": "95409ab5-a969-5812-b6dd-1961b2b680a1", 00:14:23.854 "is_configured": true, 00:14:23.854 "data_offset": 0, 00:14:23.854 
"data_size": 65536 00:14:23.854 } 00:14:23.854 ] 00:14:23.854 }' 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.854 10:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.854 [2024-11-15 10:59:30.740984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:24.114 [2024-11-15 10:59:30.828577] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:24.114 [2024-11-15 10:59:30.829946] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:24.114 [2024-11-15 10:59:30.829979] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.114 [2024-11-15 10:59:30.843687] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.114 "name": "raid_bdev1", 00:14:24.114 "uuid": "907615a6-457e-4317-b961-229fdf5fa7e6", 00:14:24.114 "strip_size_kb": 0, 00:14:24.114 "state": "online", 00:14:24.114 "raid_level": "raid1", 00:14:24.114 "superblock": false, 00:14:24.114 "num_base_bdevs": 4, 00:14:24.114 "num_base_bdevs_discovered": 3, 00:14:24.114 "num_base_bdevs_operational": 3, 00:14:24.114 "process": { 00:14:24.114 "type": "rebuild", 00:14:24.114 "target": "spare", 00:14:24.114 
"progress": { 00:14:24.114 "blocks": 14336, 00:14:24.114 "percent": 21 00:14:24.114 } 00:14:24.114 }, 00:14:24.114 "base_bdevs_list": [ 00:14:24.114 { 00:14:24.114 "name": "spare", 00:14:24.114 "uuid": "2ecde64e-b982-5efa-9ca7-057d67e1ce29", 00:14:24.114 "is_configured": true, 00:14:24.114 "data_offset": 0, 00:14:24.114 "data_size": 65536 00:14:24.114 }, 00:14:24.114 { 00:14:24.114 "name": null, 00:14:24.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.114 "is_configured": false, 00:14:24.114 "data_offset": 0, 00:14:24.114 "data_size": 65536 00:14:24.114 }, 00:14:24.114 { 00:14:24.114 "name": "BaseBdev3", 00:14:24.114 "uuid": "93b3cb1f-f602-5486-8336-a64758631ab5", 00:14:24.114 "is_configured": true, 00:14:24.114 "data_offset": 0, 00:14:24.114 "data_size": 65536 00:14:24.114 }, 00:14:24.114 { 00:14:24.114 "name": "BaseBdev4", 00:14:24.114 "uuid": "95409ab5-a969-5812-b6dd-1961b2b680a1", 00:14:24.114 "is_configured": true, 00:14:24.114 "data_offset": 0, 00:14:24.114 "data_size": 65536 00:14:24.114 } 00:14:24.114 ] 00:14:24.114 }' 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.114 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.115 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=489 00:14:24.115 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.115 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.115 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.115 10:59:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.115 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.115 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.115 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.115 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.115 10:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.115 10:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.115 10:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.115 10:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.115 "name": "raid_bdev1", 00:14:24.115 "uuid": "907615a6-457e-4317-b961-229fdf5fa7e6", 00:14:24.115 "strip_size_kb": 0, 00:14:24.115 "state": "online", 00:14:24.115 "raid_level": "raid1", 00:14:24.115 "superblock": false, 00:14:24.115 "num_base_bdevs": 4, 00:14:24.115 "num_base_bdevs_discovered": 3, 00:14:24.115 "num_base_bdevs_operational": 3, 00:14:24.115 "process": { 00:14:24.115 "type": "rebuild", 00:14:24.115 "target": "spare", 00:14:24.115 "progress": { 00:14:24.115 "blocks": 14336, 00:14:24.115 "percent": 21 00:14:24.115 } 00:14:24.115 }, 00:14:24.115 "base_bdevs_list": [ 00:14:24.115 { 00:14:24.115 "name": "spare", 00:14:24.115 "uuid": "2ecde64e-b982-5efa-9ca7-057d67e1ce29", 00:14:24.115 "is_configured": true, 00:14:24.115 "data_offset": 0, 00:14:24.115 "data_size": 65536 00:14:24.115 }, 00:14:24.115 { 00:14:24.115 "name": null, 00:14:24.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.115 "is_configured": false, 00:14:24.115 "data_offset": 0, 00:14:24.115 "data_size": 65536 00:14:24.115 }, 00:14:24.115 { 
00:14:24.115 "name": "BaseBdev3", 00:14:24.115 "uuid": "93b3cb1f-f602-5486-8336-a64758631ab5", 00:14:24.115 "is_configured": true, 00:14:24.115 "data_offset": 0, 00:14:24.115 "data_size": 65536 00:14:24.115 }, 00:14:24.115 { 00:14:24.115 "name": "BaseBdev4", 00:14:24.115 "uuid": "95409ab5-a969-5812-b6dd-1961b2b680a1", 00:14:24.115 "is_configured": true, 00:14:24.115 "data_offset": 0, 00:14:24.115 "data_size": 65536 00:14:24.115 } 00:14:24.115 ] 00:14:24.115 }' 00:14:24.115 10:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.115 10:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.115 10:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.394 [2024-11-15 10:59:31.082109] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:24.394 10:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.394 10:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:24.652 118.75 IOPS, 356.25 MiB/s [2024-11-15T10:59:31.580Z] [2024-11-15 10:59:31.323409] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:24.652 [2024-11-15 10:59:31.451395] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:24.652 [2024-11-15 10:59:31.451986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:25.221 [2024-11-15 10:59:31.857776] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:25.221 10:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:14:25.221 10:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.221 10:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.221 10:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.221 10:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.221 10:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.221 104.00 IOPS, 312.00 MiB/s [2024-11-15T10:59:32.149Z] 10:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.221 10:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.221 10:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.221 10:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.221 10:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.221 10:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.221 "name": "raid_bdev1", 00:14:25.221 "uuid": "907615a6-457e-4317-b961-229fdf5fa7e6", 00:14:25.221 "strip_size_kb": 0, 00:14:25.221 "state": "online", 00:14:25.221 "raid_level": "raid1", 00:14:25.221 "superblock": false, 00:14:25.221 "num_base_bdevs": 4, 00:14:25.221 "num_base_bdevs_discovered": 3, 00:14:25.221 "num_base_bdevs_operational": 3, 00:14:25.221 "process": { 00:14:25.221 "type": "rebuild", 00:14:25.221 "target": "spare", 00:14:25.221 "progress": { 00:14:25.221 "blocks": 30720, 00:14:25.221 "percent": 46 00:14:25.221 } 00:14:25.221 }, 00:14:25.221 "base_bdevs_list": [ 00:14:25.221 { 00:14:25.221 "name": "spare", 00:14:25.221 "uuid": "2ecde64e-b982-5efa-9ca7-057d67e1ce29", 00:14:25.221 "is_configured": 
true, 00:14:25.221 "data_offset": 0, 00:14:25.221 "data_size": 65536 00:14:25.221 }, 00:14:25.221 { 00:14:25.221 "name": null, 00:14:25.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.221 "is_configured": false, 00:14:25.221 "data_offset": 0, 00:14:25.221 "data_size": 65536 00:14:25.221 }, 00:14:25.221 { 00:14:25.221 "name": "BaseBdev3", 00:14:25.221 "uuid": "93b3cb1f-f602-5486-8336-a64758631ab5", 00:14:25.221 "is_configured": true, 00:14:25.221 "data_offset": 0, 00:14:25.221 "data_size": 65536 00:14:25.221 }, 00:14:25.221 { 00:14:25.221 "name": "BaseBdev4", 00:14:25.221 "uuid": "95409ab5-a969-5812-b6dd-1961b2b680a1", 00:14:25.221 "is_configured": true, 00:14:25.221 "data_offset": 0, 00:14:25.221 "data_size": 65536 00:14:25.221 } 00:14:25.221 ] 00:14:25.221 }' 00:14:25.221 10:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.481 [2024-11-15 10:59:32.181773] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:25.481 10:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.481 10:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.481 10:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.481 10:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:25.481 [2024-11-15 10:59:32.385499] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:25.481 [2024-11-15 10:59:32.385859] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:26.417 92.83 IOPS, 278.50 MiB/s [2024-11-15T10:59:33.345Z] [2024-11-15 10:59:33.143145] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
47104 offset_begin: 43008 offset_end: 49152 00:14:26.417 10:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:26.417 10:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.417 10:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.417 10:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.417 10:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.417 10:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.417 10:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.417 10:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.417 10:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.417 10:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.417 10:59:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.417 10:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.417 "name": "raid_bdev1", 00:14:26.417 "uuid": "907615a6-457e-4317-b961-229fdf5fa7e6", 00:14:26.417 "strip_size_kb": 0, 00:14:26.417 "state": "online", 00:14:26.417 "raid_level": "raid1", 00:14:26.417 "superblock": false, 00:14:26.417 "num_base_bdevs": 4, 00:14:26.417 "num_base_bdevs_discovered": 3, 00:14:26.417 "num_base_bdevs_operational": 3, 00:14:26.417 "process": { 00:14:26.417 "type": "rebuild", 00:14:26.417 "target": "spare", 00:14:26.417 "progress": { 00:14:26.417 "blocks": 47104, 00:14:26.417 "percent": 71 00:14:26.417 } 00:14:26.417 }, 00:14:26.417 "base_bdevs_list": [ 00:14:26.417 { 00:14:26.417 "name": 
"spare", 00:14:26.417 "uuid": "2ecde64e-b982-5efa-9ca7-057d67e1ce29", 00:14:26.417 "is_configured": true, 00:14:26.417 "data_offset": 0, 00:14:26.417 "data_size": 65536 00:14:26.417 }, 00:14:26.417 { 00:14:26.417 "name": null, 00:14:26.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.417 "is_configured": false, 00:14:26.417 "data_offset": 0, 00:14:26.417 "data_size": 65536 00:14:26.417 }, 00:14:26.417 { 00:14:26.417 "name": "BaseBdev3", 00:14:26.417 "uuid": "93b3cb1f-f602-5486-8336-a64758631ab5", 00:14:26.417 "is_configured": true, 00:14:26.417 "data_offset": 0, 00:14:26.417 "data_size": 65536 00:14:26.417 }, 00:14:26.417 { 00:14:26.417 "name": "BaseBdev4", 00:14:26.417 "uuid": "95409ab5-a969-5812-b6dd-1961b2b680a1", 00:14:26.417 "is_configured": true, 00:14:26.417 "data_offset": 0, 00:14:26.417 "data_size": 65536 00:14:26.417 } 00:14:26.417 ] 00:14:26.417 }' 00:14:26.417 10:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.417 10:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.417 10:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.675 10:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.675 10:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.243 84.14 IOPS, 252.43 MiB/s [2024-11-15T10:59:34.171Z] [2024-11-15 10:59:34.129127] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:27.502 [2024-11-15 10:59:34.234400] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:27.502 [2024-11-15 10:59:34.237024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.502 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.502 10:59:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.502 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.502 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.502 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.502 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.502 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.502 10:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.502 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.502 10:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.502 10:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.502 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.502 "name": "raid_bdev1", 00:14:27.502 "uuid": "907615a6-457e-4317-b961-229fdf5fa7e6", 00:14:27.502 "strip_size_kb": 0, 00:14:27.502 "state": "online", 00:14:27.502 "raid_level": "raid1", 00:14:27.502 "superblock": false, 00:14:27.502 "num_base_bdevs": 4, 00:14:27.502 "num_base_bdevs_discovered": 3, 00:14:27.502 "num_base_bdevs_operational": 3, 00:14:27.502 "base_bdevs_list": [ 00:14:27.502 { 00:14:27.502 "name": "spare", 00:14:27.502 "uuid": "2ecde64e-b982-5efa-9ca7-057d67e1ce29", 00:14:27.502 "is_configured": true, 00:14:27.502 "data_offset": 0, 00:14:27.502 "data_size": 65536 00:14:27.502 }, 00:14:27.502 { 00:14:27.502 "name": null, 00:14:27.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.502 "is_configured": false, 00:14:27.502 "data_offset": 0, 00:14:27.502 "data_size": 
65536 00:14:27.502 }, 00:14:27.502 { 00:14:27.502 "name": "BaseBdev3", 00:14:27.502 "uuid": "93b3cb1f-f602-5486-8336-a64758631ab5", 00:14:27.502 "is_configured": true, 00:14:27.502 "data_offset": 0, 00:14:27.502 "data_size": 65536 00:14:27.502 }, 00:14:27.502 { 00:14:27.502 "name": "BaseBdev4", 00:14:27.502 "uuid": "95409ab5-a969-5812-b6dd-1961b2b680a1", 00:14:27.502 "is_configured": true, 00:14:27.502 "data_offset": 0, 00:14:27.502 "data_size": 65536 00:14:27.502 } 00:14:27.502 ] 00:14:27.502 }' 00:14:27.502 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.761 "name": "raid_bdev1", 00:14:27.761 "uuid": "907615a6-457e-4317-b961-229fdf5fa7e6", 00:14:27.761 "strip_size_kb": 0, 00:14:27.761 "state": "online", 00:14:27.761 "raid_level": "raid1", 00:14:27.761 "superblock": false, 00:14:27.761 "num_base_bdevs": 4, 00:14:27.761 "num_base_bdevs_discovered": 3, 00:14:27.761 "num_base_bdevs_operational": 3, 00:14:27.761 "base_bdevs_list": [ 00:14:27.761 { 00:14:27.761 "name": "spare", 00:14:27.761 "uuid": "2ecde64e-b982-5efa-9ca7-057d67e1ce29", 00:14:27.761 "is_configured": true, 00:14:27.761 "data_offset": 0, 00:14:27.761 "data_size": 65536 00:14:27.761 }, 00:14:27.761 { 00:14:27.761 "name": null, 00:14:27.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.761 "is_configured": false, 00:14:27.761 "data_offset": 0, 00:14:27.761 "data_size": 65536 00:14:27.761 }, 00:14:27.761 { 00:14:27.761 "name": "BaseBdev3", 00:14:27.761 "uuid": "93b3cb1f-f602-5486-8336-a64758631ab5", 00:14:27.761 "is_configured": true, 00:14:27.761 "data_offset": 0, 00:14:27.761 "data_size": 65536 00:14:27.761 }, 00:14:27.761 { 00:14:27.761 "name": "BaseBdev4", 00:14:27.761 "uuid": "95409ab5-a969-5812-b6dd-1961b2b680a1", 00:14:27.761 "is_configured": true, 00:14:27.761 "data_offset": 0, 00:14:27.761 "data_size": 65536 00:14:27.761 } 00:14:27.761 ] 00:14:27.761 }' 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e 
]] 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.761 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.761 "name": "raid_bdev1", 00:14:27.761 "uuid": "907615a6-457e-4317-b961-229fdf5fa7e6", 00:14:27.761 "strip_size_kb": 0, 00:14:27.761 "state": "online", 00:14:27.761 "raid_level": "raid1", 00:14:27.761 "superblock": false, 00:14:27.761 
"num_base_bdevs": 4, 00:14:27.761 "num_base_bdevs_discovered": 3, 00:14:27.761 "num_base_bdevs_operational": 3, 00:14:27.761 "base_bdevs_list": [ 00:14:27.761 { 00:14:27.762 "name": "spare", 00:14:27.762 "uuid": "2ecde64e-b982-5efa-9ca7-057d67e1ce29", 00:14:27.762 "is_configured": true, 00:14:27.762 "data_offset": 0, 00:14:27.762 "data_size": 65536 00:14:27.762 }, 00:14:27.762 { 00:14:27.762 "name": null, 00:14:27.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.762 "is_configured": false, 00:14:27.762 "data_offset": 0, 00:14:27.762 "data_size": 65536 00:14:27.762 }, 00:14:27.762 { 00:14:27.762 "name": "BaseBdev3", 00:14:27.762 "uuid": "93b3cb1f-f602-5486-8336-a64758631ab5", 00:14:27.762 "is_configured": true, 00:14:27.762 "data_offset": 0, 00:14:27.762 "data_size": 65536 00:14:27.762 }, 00:14:27.762 { 00:14:27.762 "name": "BaseBdev4", 00:14:27.762 "uuid": "95409ab5-a969-5812-b6dd-1961b2b680a1", 00:14:27.762 "is_configured": true, 00:14:27.762 "data_offset": 0, 00:14:27.762 "data_size": 65536 00:14:27.762 } 00:14:27.762 ] 00:14:27.762 }' 00:14:27.762 10:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.762 10:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.329 77.50 IOPS, 232.50 MiB/s [2024-11-15T10:59:35.257Z] 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:28.329 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.329 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.329 [2024-11-15 10:59:35.115666] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:28.329 [2024-11-15 10:59:35.115700] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:28.329 00:14:28.329 Latency(us) 00:14:28.329 [2024-11-15T10:59:35.257Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:14:28.329 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:28.329 raid_bdev1 : 8.13 76.60 229.79 0.00 0.00 18015.32 320.17 117220.72 00:14:28.329 [2024-11-15T10:59:35.257Z] =================================================================================================================== 00:14:28.329 [2024-11-15T10:59:35.257Z] Total : 76.60 229.79 0.00 0.00 18015.32 320.17 117220.72 00:14:28.329 { 00:14:28.329 "results": [ 00:14:28.329 { 00:14:28.329 "job": "raid_bdev1", 00:14:28.329 "core_mask": "0x1", 00:14:28.329 "workload": "randrw", 00:14:28.329 "percentage": 50, 00:14:28.329 "status": "finished", 00:14:28.329 "queue_depth": 2, 00:14:28.329 "io_size": 3145728, 00:14:28.329 "runtime": 8.133551, 00:14:28.329 "iops": 76.59631076266689, 00:14:28.329 "mibps": 229.78893228800067, 00:14:28.329 "io_failed": 0, 00:14:28.329 "io_timeout": 0, 00:14:28.330 "avg_latency_us": 18015.32436793372, 00:14:28.330 "min_latency_us": 320.16768558951964, 00:14:28.330 "max_latency_us": 117220.7231441048 00:14:28.330 } 00:14:28.330 ], 00:14:28.330 "core_count": 1 00:14:28.330 } 00:14:28.330 [2024-11-15 10:59:35.221436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.330 [2024-11-15 10:59:35.221488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.330 [2024-11-15 10:59:35.221589] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.330 [2024-11-15 10:59:35.221602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:28.330 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.330 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:28.330 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:28.330 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.330 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.330 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:28.589 /dev/nbd0 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:28.589 10:59:35 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:28.589 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.589 1+0 records in 00:14:28.589 1+0 records out 00:14:28.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410741 s, 10.0 MB/s 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:28.849 /dev/nbd1 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@871 -- # local i 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.849 1+0 records in 00:14:28.849 1+0 records out 00:14:28.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353158 s, 11.6 MB/s 00:14:28.849 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.108 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:14:29.108 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.108 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:29.108 10:59:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:14:29.108 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.108 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:29.108 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:29.108 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:29.108 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.108 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:29.108 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:29.108 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:29.108 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.108 10:59:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:29.366 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:29.367 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:29.367 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:29.625 /dev/nbd1 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:29.625 10:59:36 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.625 1+0 records in 00:14:29.625 1+0 records out 00:14:29.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020276 s, 20.2 MB/s 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.625 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd1 00:14:29.893 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:29.893 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:29.893 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:29.893 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.893 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.893 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:29.893 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:29.893 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.893 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:29.893 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.893 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:29.893 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:29.893 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:29.893 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.893 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:30.153 10:59:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:30.153 10:59:37 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78922 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 78922 ']' 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 78922 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78922 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:30.153 killing process with pid 78922 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78922' 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 78922 00:14:30.153 Received shutdown signal, test time was about 9.979506 seconds 00:14:30.153 00:14:30.153 Latency(us) 00:14:30.153 [2024-11-15T10:59:37.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.153 [2024-11-15T10:59:37.081Z] 
=================================================================================================================== 00:14:30.153 [2024-11-15T10:59:37.081Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:30.153 [2024-11-15 10:59:37.041865] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:30.153 10:59:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 78922 00:14:30.720 [2024-11-15 10:59:37.460450] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:32.098 00:14:32.098 real 0m13.482s 00:14:32.098 user 0m17.142s 00:14:32.098 sys 0m1.761s 00:14:32.098 ************************************ 00:14:32.098 END TEST raid_rebuild_test_io 00:14:32.098 ************************************ 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.098 10:59:38 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:32.098 10:59:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:32.098 10:59:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:32.098 10:59:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:32.098 ************************************ 00:14:32.098 START TEST raid_rebuild_test_sb_io 00:14:32.098 ************************************ 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:32.098 10:59:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79332 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79332 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 79332 ']' 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:32.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:32.098 10:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.098 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:32.098 Zero copy mechanism will not be used. 00:14:32.098 [2024-11-15 10:59:38.780151] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:14:32.098 [2024-11-15 10:59:38.780270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79332 ] 00:14:32.098 [2024-11-15 10:59:38.954966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.357 [2024-11-15 10:59:39.070823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.357 [2024-11-15 10:59:39.267317] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.357 [2024-11-15 10:59:39.267366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.925 BaseBdev1_malloc 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.925 [2024-11-15 10:59:39.665062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:32.925 [2024-11-15 10:59:39.665132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.925 [2024-11-15 10:59:39.665157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:32.925 [2024-11-15 10:59:39.665168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.925 [2024-11-15 10:59:39.667164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.925 [2024-11-15 10:59:39.667205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:32.925 BaseBdev1 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.925 BaseBdev2_malloc 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.925 [2024-11-15 10:59:39.718379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:32.925 [2024-11-15 10:59:39.718434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.925 [2024-11-15 10:59:39.718453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:32.925 [2024-11-15 10:59:39.718467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.925 [2024-11-15 10:59:39.720428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.925 [2024-11-15 10:59:39.720488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:32.925 BaseBdev2 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.925 BaseBdev3_malloc 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.925 10:59:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.925 [2024-11-15 10:59:39.790699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:32.925 [2024-11-15 10:59:39.790778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.925 [2024-11-15 10:59:39.790801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:32.925 [2024-11-15 10:59:39.790812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.925 [2024-11-15 10:59:39.792793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.925 [2024-11-15 10:59:39.792835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:32.925 BaseBdev3 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.925 BaseBdev4_malloc 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.925 [2024-11-15 10:59:39.845339] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:14:32.925 [2024-11-15 10:59:39.845386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.925 [2024-11-15 10:59:39.845405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:32.925 [2024-11-15 10:59:39.845416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.925 [2024-11-15 10:59:39.847378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.925 [2024-11-15 10:59:39.847416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:32.925 BaseBdev4 00:14:32.925 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.185 spare_malloc 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.185 spare_delay 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.185 [2024-11-15 10:59:39.909615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:33.185 [2024-11-15 10:59:39.909671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.185 [2024-11-15 10:59:39.909691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:33.185 [2024-11-15 10:59:39.909702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.185 [2024-11-15 10:59:39.911697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.185 [2024-11-15 10:59:39.911736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:33.185 spare 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.185 [2024-11-15 10:59:39.921644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:33.185 [2024-11-15 10:59:39.923389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:33.185 [2024-11-15 10:59:39.923472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.185 [2024-11-15 10:59:39.923521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:33.185 [2024-11-15 10:59:39.923687] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:14:33.185 [2024-11-15 10:59:39.923711] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:33.185 [2024-11-15 10:59:39.923937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:33.185 [2024-11-15 10:59:39.924110] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:33.185 [2024-11-15 10:59:39.924129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:33.185 [2024-11-15 10:59:39.924271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.185 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.185 "name": "raid_bdev1", 00:14:33.185 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:33.185 "strip_size_kb": 0, 00:14:33.185 "state": "online", 00:14:33.185 "raid_level": "raid1", 00:14:33.185 "superblock": true, 00:14:33.185 "num_base_bdevs": 4, 00:14:33.185 "num_base_bdevs_discovered": 4, 00:14:33.185 "num_base_bdevs_operational": 4, 00:14:33.185 "base_bdevs_list": [ 00:14:33.185 { 00:14:33.185 "name": "BaseBdev1", 00:14:33.185 "uuid": "baf66155-8479-5cef-a79c-3114ef784afa", 00:14:33.185 "is_configured": true, 00:14:33.185 "data_offset": 2048, 00:14:33.185 "data_size": 63488 00:14:33.185 }, 00:14:33.185 { 00:14:33.185 "name": "BaseBdev2", 00:14:33.185 "uuid": "d1ac273e-2299-5dbc-b394-713688b699f0", 00:14:33.185 "is_configured": true, 00:14:33.185 "data_offset": 2048, 00:14:33.185 "data_size": 63488 00:14:33.185 }, 00:14:33.185 { 00:14:33.185 "name": "BaseBdev3", 00:14:33.185 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:33.185 "is_configured": true, 00:14:33.185 "data_offset": 2048, 00:14:33.185 "data_size": 63488 00:14:33.186 }, 00:14:33.186 { 00:14:33.186 "name": "BaseBdev4", 00:14:33.186 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:33.186 "is_configured": true, 00:14:33.186 "data_offset": 2048, 00:14:33.186 "data_size": 63488 00:14:33.186 } 00:14:33.186 ] 00:14:33.186 }' 00:14:33.186 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:33.186 10:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.754 [2024-11-15 10:59:40.397214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:33.754 10:59:40 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.754 [2024-11-15 10:59:40.496645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.754 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.755 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.755 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.755 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.755 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:33.755 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.755 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.755 "name": "raid_bdev1", 00:14:33.755 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:33.755 "strip_size_kb": 0, 00:14:33.755 "state": "online", 00:14:33.755 "raid_level": "raid1", 00:14:33.755 "superblock": true, 00:14:33.755 "num_base_bdevs": 4, 00:14:33.755 "num_base_bdevs_discovered": 3, 00:14:33.755 "num_base_bdevs_operational": 3, 00:14:33.755 "base_bdevs_list": [ 00:14:33.755 { 00:14:33.755 "name": null, 00:14:33.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.755 "is_configured": false, 00:14:33.755 "data_offset": 0, 00:14:33.755 "data_size": 63488 00:14:33.755 }, 00:14:33.755 { 00:14:33.755 "name": "BaseBdev2", 00:14:33.755 "uuid": "d1ac273e-2299-5dbc-b394-713688b699f0", 00:14:33.755 "is_configured": true, 00:14:33.755 "data_offset": 2048, 00:14:33.755 "data_size": 63488 00:14:33.755 }, 00:14:33.755 { 00:14:33.755 "name": "BaseBdev3", 00:14:33.755 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:33.755 "is_configured": true, 00:14:33.755 "data_offset": 2048, 00:14:33.755 "data_size": 63488 00:14:33.755 }, 00:14:33.755 { 00:14:33.755 "name": "BaseBdev4", 00:14:33.755 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:33.755 "is_configured": true, 00:14:33.755 "data_offset": 2048, 00:14:33.755 "data_size": 63488 00:14:33.755 } 00:14:33.755 ] 00:14:33.755 }' 00:14:33.755 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.755 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.755 [2024-11-15 10:59:40.588531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:33.755 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:33.755 Zero copy mechanism will not be used. 
00:14:33.755 Running I/O for 60 seconds... 00:14:34.323 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:34.323 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.323 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.323 [2024-11-15 10:59:40.952916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.323 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.323 10:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:34.323 [2024-11-15 10:59:41.019932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:34.323 [2024-11-15 10:59:41.021901] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:34.323 [2024-11-15 10:59:41.135920] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:34.323 [2024-11-15 10:59:41.137454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:34.583 [2024-11-15 10:59:41.345907] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:34.583 [2024-11-15 10:59:41.346710] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:34.842 167.00 IOPS, 501.00 MiB/s [2024-11-15T10:59:41.770Z] [2024-11-15 10:59:41.757367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:35.102 [2024-11-15 10:59:41.866716] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:35.102 
[2024-11-15 10:59:41.867082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:35.102 10:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.102 10:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.102 10:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.102 10:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.102 10:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.102 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.102 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.102 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.102 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.361 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.361 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.361 "name": "raid_bdev1", 00:14:35.361 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:35.361 "strip_size_kb": 0, 00:14:35.361 "state": "online", 00:14:35.361 "raid_level": "raid1", 00:14:35.361 "superblock": true, 00:14:35.361 "num_base_bdevs": 4, 00:14:35.361 "num_base_bdevs_discovered": 4, 00:14:35.361 "num_base_bdevs_operational": 4, 00:14:35.361 "process": { 00:14:35.361 "type": "rebuild", 00:14:35.361 "target": "spare", 00:14:35.361 "progress": { 00:14:35.361 "blocks": 12288, 00:14:35.361 "percent": 19 00:14:35.361 } 00:14:35.361 }, 00:14:35.361 "base_bdevs_list": [ 
00:14:35.361 { 00:14:35.361 "name": "spare", 00:14:35.361 "uuid": "b6c8fc97-18d1-52f2-b075-130ab7f59433", 00:14:35.361 "is_configured": true, 00:14:35.361 "data_offset": 2048, 00:14:35.361 "data_size": 63488 00:14:35.361 }, 00:14:35.361 { 00:14:35.361 "name": "BaseBdev2", 00:14:35.361 "uuid": "d1ac273e-2299-5dbc-b394-713688b699f0", 00:14:35.361 "is_configured": true, 00:14:35.361 "data_offset": 2048, 00:14:35.361 "data_size": 63488 00:14:35.361 }, 00:14:35.361 { 00:14:35.361 "name": "BaseBdev3", 00:14:35.361 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:35.361 "is_configured": true, 00:14:35.361 "data_offset": 2048, 00:14:35.361 "data_size": 63488 00:14:35.361 }, 00:14:35.361 { 00:14:35.361 "name": "BaseBdev4", 00:14:35.361 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:35.361 "is_configured": true, 00:14:35.361 "data_offset": 2048, 00:14:35.361 "data_size": 63488 00:14:35.361 } 00:14:35.361 ] 00:14:35.361 }' 00:14:35.361 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.361 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.361 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.361 [2024-11-15 10:59:42.107066] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:35.361 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.361 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:35.361 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.361 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.361 [2024-11-15 10:59:42.155071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:14:35.619 [2024-11-15 10:59:42.338546] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:35.619 [2024-11-15 10:59:42.349375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.619 [2024-11-15 10:59:42.349433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.619 [2024-11-15 10:59:42.349450] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:35.619 [2024-11-15 10:59:42.384158] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:35.619 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.619 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:35.619 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.619 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.620 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.620 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.620 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.620 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.620 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.620 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.620 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.620 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.620 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.620 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.620 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.620 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.620 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.620 "name": "raid_bdev1", 00:14:35.620 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:35.620 "strip_size_kb": 0, 00:14:35.620 "state": "online", 00:14:35.620 "raid_level": "raid1", 00:14:35.620 "superblock": true, 00:14:35.620 "num_base_bdevs": 4, 00:14:35.620 "num_base_bdevs_discovered": 3, 00:14:35.620 "num_base_bdevs_operational": 3, 00:14:35.620 "base_bdevs_list": [ 00:14:35.620 { 00:14:35.620 "name": null, 00:14:35.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.620 "is_configured": false, 00:14:35.620 "data_offset": 0, 00:14:35.620 "data_size": 63488 00:14:35.620 }, 00:14:35.620 { 00:14:35.620 "name": "BaseBdev2", 00:14:35.620 "uuid": "d1ac273e-2299-5dbc-b394-713688b699f0", 00:14:35.620 "is_configured": true, 00:14:35.620 "data_offset": 2048, 00:14:35.620 "data_size": 63488 00:14:35.620 }, 00:14:35.620 { 00:14:35.620 "name": "BaseBdev3", 00:14:35.620 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:35.620 "is_configured": true, 00:14:35.620 "data_offset": 2048, 00:14:35.620 "data_size": 63488 00:14:35.620 }, 00:14:35.620 { 00:14:35.620 "name": "BaseBdev4", 00:14:35.620 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:35.620 "is_configured": true, 00:14:35.620 "data_offset": 2048, 00:14:35.620 "data_size": 63488 00:14:35.620 } 00:14:35.620 ] 00:14:35.620 }' 00:14:35.620 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:35.620 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.136 153.50 IOPS, 460.50 MiB/s [2024-11-15T10:59:43.064Z] 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.136 "name": "raid_bdev1", 00:14:36.136 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:36.136 "strip_size_kb": 0, 00:14:36.136 "state": "online", 00:14:36.136 "raid_level": "raid1", 00:14:36.136 "superblock": true, 00:14:36.136 "num_base_bdevs": 4, 00:14:36.136 "num_base_bdevs_discovered": 3, 00:14:36.136 "num_base_bdevs_operational": 3, 00:14:36.136 "base_bdevs_list": [ 00:14:36.136 { 00:14:36.136 "name": null, 00:14:36.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.136 "is_configured": false, 00:14:36.136 "data_offset": 0, 00:14:36.136 "data_size": 63488 
00:14:36.136 }, 00:14:36.136 { 00:14:36.136 "name": "BaseBdev2", 00:14:36.136 "uuid": "d1ac273e-2299-5dbc-b394-713688b699f0", 00:14:36.136 "is_configured": true, 00:14:36.136 "data_offset": 2048, 00:14:36.136 "data_size": 63488 00:14:36.136 }, 00:14:36.136 { 00:14:36.136 "name": "BaseBdev3", 00:14:36.136 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:36.136 "is_configured": true, 00:14:36.136 "data_offset": 2048, 00:14:36.136 "data_size": 63488 00:14:36.136 }, 00:14:36.136 { 00:14:36.136 "name": "BaseBdev4", 00:14:36.136 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:36.136 "is_configured": true, 00:14:36.136 "data_offset": 2048, 00:14:36.136 "data_size": 63488 00:14:36.136 } 00:14:36.136 ] 00:14:36.136 }' 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.136 10:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.136 [2024-11-15 10:59:43.001651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:36.136 10:59:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.136 10:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:36.136 [2024-11-15 10:59:43.058096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:36.136 
[2024-11-15 10:59:43.060223] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:36.395 [2024-11-15 10:59:43.178019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:36.395 [2024-11-15 10:59:43.178626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:36.395 [2024-11-15 10:59:43.300020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:36.395 [2024-11-15 10:59:43.300825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:36.963 160.33 IOPS, 481.00 MiB/s [2024-11-15T10:59:43.891Z] [2024-11-15 10:59:43.632068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:36.963 [2024-11-15 10:59:43.874476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:37.222 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.222 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.222 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.222 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.222 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.222 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.222 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.222 10:59:44 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:14:37.222 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.222 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.222 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.222 "name": "raid_bdev1", 00:14:37.222 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:37.222 "strip_size_kb": 0, 00:14:37.222 "state": "online", 00:14:37.222 "raid_level": "raid1", 00:14:37.222 "superblock": true, 00:14:37.222 "num_base_bdevs": 4, 00:14:37.222 "num_base_bdevs_discovered": 4, 00:14:37.222 "num_base_bdevs_operational": 4, 00:14:37.222 "process": { 00:14:37.222 "type": "rebuild", 00:14:37.222 "target": "spare", 00:14:37.222 "progress": { 00:14:37.222 "blocks": 12288, 00:14:37.222 "percent": 19 00:14:37.222 } 00:14:37.222 }, 00:14:37.222 "base_bdevs_list": [ 00:14:37.222 { 00:14:37.222 "name": "spare", 00:14:37.222 "uuid": "b6c8fc97-18d1-52f2-b075-130ab7f59433", 00:14:37.222 "is_configured": true, 00:14:37.222 "data_offset": 2048, 00:14:37.222 "data_size": 63488 00:14:37.222 }, 00:14:37.222 { 00:14:37.222 "name": "BaseBdev2", 00:14:37.222 "uuid": "d1ac273e-2299-5dbc-b394-713688b699f0", 00:14:37.222 "is_configured": true, 00:14:37.222 "data_offset": 2048, 00:14:37.222 "data_size": 63488 00:14:37.222 }, 00:14:37.222 { 00:14:37.222 "name": "BaseBdev3", 00:14:37.223 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:37.223 "is_configured": true, 00:14:37.223 "data_offset": 2048, 00:14:37.223 "data_size": 63488 00:14:37.223 }, 00:14:37.223 { 00:14:37.223 "name": "BaseBdev4", 00:14:37.223 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:37.223 "is_configured": true, 00:14:37.223 "data_offset": 2048, 00:14:37.223 "data_size": 63488 00:14:37.223 } 00:14:37.223 ] 00:14:37.223 }' 00:14:37.223 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:14:37.223 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.223 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:37.482 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.482 [2024-11-15 10:59:44.204799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:37.482 [2024-11-15 10:59:44.224806] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:37.482 [2024-11-15 10:59:44.389766] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:37.482 [2024-11-15 10:59:44.389810] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.482 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.741 "name": "raid_bdev1", 00:14:37.741 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:37.741 "strip_size_kb": 0, 00:14:37.741 "state": "online", 00:14:37.741 "raid_level": "raid1", 00:14:37.741 "superblock": true, 00:14:37.741 "num_base_bdevs": 4, 00:14:37.741 "num_base_bdevs_discovered": 3, 00:14:37.741 "num_base_bdevs_operational": 3, 00:14:37.741 "process": { 00:14:37.741 "type": "rebuild", 00:14:37.741 "target": "spare", 00:14:37.741 "progress": { 
00:14:37.741 "blocks": 16384, 00:14:37.741 "percent": 25 00:14:37.741 } 00:14:37.741 }, 00:14:37.741 "base_bdevs_list": [ 00:14:37.741 { 00:14:37.741 "name": "spare", 00:14:37.741 "uuid": "b6c8fc97-18d1-52f2-b075-130ab7f59433", 00:14:37.741 "is_configured": true, 00:14:37.741 "data_offset": 2048, 00:14:37.741 "data_size": 63488 00:14:37.741 }, 00:14:37.741 { 00:14:37.741 "name": null, 00:14:37.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.741 "is_configured": false, 00:14:37.741 "data_offset": 0, 00:14:37.741 "data_size": 63488 00:14:37.741 }, 00:14:37.741 { 00:14:37.741 "name": "BaseBdev3", 00:14:37.741 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:37.741 "is_configured": true, 00:14:37.741 "data_offset": 2048, 00:14:37.741 "data_size": 63488 00:14:37.741 }, 00:14:37.741 { 00:14:37.741 "name": "BaseBdev4", 00:14:37.741 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:37.741 "is_configured": true, 00:14:37.741 "data_offset": 2048, 00:14:37.741 "data_size": 63488 00:14:37.741 } 00:14:37.741 ] 00:14:37.741 }' 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=503 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.741 
10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.741 144.00 IOPS, 432.00 MiB/s [2024-11-15T10:59:44.669Z] 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.741 "name": "raid_bdev1", 00:14:37.741 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:37.741 "strip_size_kb": 0, 00:14:37.741 "state": "online", 00:14:37.741 "raid_level": "raid1", 00:14:37.741 "superblock": true, 00:14:37.741 "num_base_bdevs": 4, 00:14:37.741 "num_base_bdevs_discovered": 3, 00:14:37.741 "num_base_bdevs_operational": 3, 00:14:37.741 "process": { 00:14:37.741 "type": "rebuild", 00:14:37.741 "target": "spare", 00:14:37.741 "progress": { 00:14:37.741 "blocks": 18432, 00:14:37.741 "percent": 29 00:14:37.741 } 00:14:37.741 }, 00:14:37.741 "base_bdevs_list": [ 00:14:37.741 { 00:14:37.741 "name": "spare", 00:14:37.741 "uuid": "b6c8fc97-18d1-52f2-b075-130ab7f59433", 00:14:37.741 "is_configured": true, 00:14:37.741 "data_offset": 2048, 00:14:37.741 "data_size": 63488 00:14:37.741 }, 00:14:37.741 { 00:14:37.741 "name": null, 00:14:37.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.741 "is_configured": false, 
00:14:37.741 "data_offset": 0, 00:14:37.741 "data_size": 63488 00:14:37.741 }, 00:14:37.741 { 00:14:37.741 "name": "BaseBdev3", 00:14:37.741 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:37.741 "is_configured": true, 00:14:37.741 "data_offset": 2048, 00:14:37.741 "data_size": 63488 00:14:37.741 }, 00:14:37.741 { 00:14:37.741 "name": "BaseBdev4", 00:14:37.741 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:37.741 "is_configured": true, 00:14:37.741 "data_offset": 2048, 00:14:37.741 "data_size": 63488 00:14:37.741 } 00:14:37.741 ] 00:14:37.741 }' 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.741 [2024-11-15 10:59:44.630454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:37.741 [2024-11-15 10:59:44.630961] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.741 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.009 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.009 10:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:38.009 [2024-11-15 10:59:44.761408] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:38.576 [2024-11-15 10:59:45.442965] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:38.576 [2024-11-15 10:59:45.443500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:38.833 126.20 IOPS, 378.60 MiB/s 
[2024-11-15T10:59:45.761Z] 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:38.833 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.833 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.833 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.833 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.833 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.833 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.833 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.833 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.833 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.833 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.117 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.117 "name": "raid_bdev1", 00:14:39.117 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:39.117 "strip_size_kb": 0, 00:14:39.117 "state": "online", 00:14:39.117 "raid_level": "raid1", 00:14:39.117 "superblock": true, 00:14:39.117 "num_base_bdevs": 4, 00:14:39.117 "num_base_bdevs_discovered": 3, 00:14:39.117 "num_base_bdevs_operational": 3, 00:14:39.117 "process": { 00:14:39.117 "type": "rebuild", 00:14:39.117 "target": "spare", 00:14:39.117 "progress": { 00:14:39.117 "blocks": 36864, 00:14:39.117 "percent": 58 00:14:39.117 } 00:14:39.117 }, 00:14:39.117 "base_bdevs_list": [ 00:14:39.117 { 00:14:39.117 
"name": "spare", 00:14:39.117 "uuid": "b6c8fc97-18d1-52f2-b075-130ab7f59433", 00:14:39.117 "is_configured": true, 00:14:39.117 "data_offset": 2048, 00:14:39.117 "data_size": 63488 00:14:39.117 }, 00:14:39.117 { 00:14:39.117 "name": null, 00:14:39.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.117 "is_configured": false, 00:14:39.117 "data_offset": 0, 00:14:39.117 "data_size": 63488 00:14:39.117 }, 00:14:39.117 { 00:14:39.117 "name": "BaseBdev3", 00:14:39.117 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:39.117 "is_configured": true, 00:14:39.117 "data_offset": 2048, 00:14:39.117 "data_size": 63488 00:14:39.117 }, 00:14:39.117 { 00:14:39.117 "name": "BaseBdev4", 00:14:39.117 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:39.117 "is_configured": true, 00:14:39.117 "data_offset": 2048, 00:14:39.117 "data_size": 63488 00:14:39.117 } 00:14:39.117 ] 00:14:39.117 }' 00:14:39.117 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.117 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.117 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.117 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.117 10:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:39.947 111.17 IOPS, 333.50 MiB/s [2024-11-15T10:59:46.875Z] 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.947 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.947 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.947 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:39.947 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.947 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.947 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.947 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.947 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.947 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.206 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.206 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.206 "name": "raid_bdev1", 00:14:40.206 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:40.206 "strip_size_kb": 0, 00:14:40.206 "state": "online", 00:14:40.206 "raid_level": "raid1", 00:14:40.206 "superblock": true, 00:14:40.206 "num_base_bdevs": 4, 00:14:40.206 "num_base_bdevs_discovered": 3, 00:14:40.206 "num_base_bdevs_operational": 3, 00:14:40.206 "process": { 00:14:40.206 "type": "rebuild", 00:14:40.206 "target": "spare", 00:14:40.206 "progress": { 00:14:40.206 "blocks": 59392, 00:14:40.206 "percent": 93 00:14:40.206 } 00:14:40.206 }, 00:14:40.206 "base_bdevs_list": [ 00:14:40.206 { 00:14:40.206 "name": "spare", 00:14:40.206 "uuid": "b6c8fc97-18d1-52f2-b075-130ab7f59433", 00:14:40.206 "is_configured": true, 00:14:40.206 "data_offset": 2048, 00:14:40.206 "data_size": 63488 00:14:40.206 }, 00:14:40.206 { 00:14:40.206 "name": null, 00:14:40.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.206 "is_configured": false, 00:14:40.206 "data_offset": 0, 00:14:40.206 "data_size": 63488 00:14:40.206 }, 00:14:40.206 { 00:14:40.206 "name": "BaseBdev3", 00:14:40.206 "uuid": 
"ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:40.206 "is_configured": true, 00:14:40.206 "data_offset": 2048, 00:14:40.206 "data_size": 63488 00:14:40.206 }, 00:14:40.206 { 00:14:40.206 "name": "BaseBdev4", 00:14:40.206 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:40.206 "is_configured": true, 00:14:40.206 "data_offset": 2048, 00:14:40.206 "data_size": 63488 00:14:40.206 } 00:14:40.206 ] 00:14:40.206 }' 00:14:40.206 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.207 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.207 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.207 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.207 10:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:40.207 [2024-11-15 10:59:47.070001] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:40.466 [2024-11-15 10:59:47.169767] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:40.466 [2024-11-15 10:59:47.171920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.294 99.86 IOPS, 299.57 MiB/s [2024-11-15T10:59:48.222Z] 10:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.294 10:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.294 10:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.294 10:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.294 10:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:14:41.294 10:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.294 10:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.294 10:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.294 10:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.294 10:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.294 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.294 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.294 "name": "raid_bdev1", 00:14:41.294 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:41.294 "strip_size_kb": 0, 00:14:41.294 "state": "online", 00:14:41.294 "raid_level": "raid1", 00:14:41.294 "superblock": true, 00:14:41.294 "num_base_bdevs": 4, 00:14:41.294 "num_base_bdevs_discovered": 3, 00:14:41.294 "num_base_bdevs_operational": 3, 00:14:41.294 "base_bdevs_list": [ 00:14:41.294 { 00:14:41.294 "name": "spare", 00:14:41.294 "uuid": "b6c8fc97-18d1-52f2-b075-130ab7f59433", 00:14:41.294 "is_configured": true, 00:14:41.294 "data_offset": 2048, 00:14:41.294 "data_size": 63488 00:14:41.294 }, 00:14:41.294 { 00:14:41.294 "name": null, 00:14:41.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.294 "is_configured": false, 00:14:41.294 "data_offset": 0, 00:14:41.294 "data_size": 63488 00:14:41.294 }, 00:14:41.294 { 00:14:41.294 "name": "BaseBdev3", 00:14:41.294 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:41.294 "is_configured": true, 00:14:41.294 "data_offset": 2048, 00:14:41.294 "data_size": 63488 00:14:41.294 }, 00:14:41.294 { 00:14:41.294 "name": "BaseBdev4", 00:14:41.294 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:41.294 "is_configured": true, 00:14:41.294 
"data_offset": 2048, 00:14:41.294 "data_size": 63488 00:14:41.294 } 00:14:41.294 ] 00:14:41.294 }' 00:14:41.294 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.294 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:41.294 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.294 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:41.294 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:41.294 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:41.294 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.294 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:41.295 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:41.295 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.295 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.295 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.295 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.295 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.295 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.295 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.295 "name": "raid_bdev1", 00:14:41.295 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 
00:14:41.295 "strip_size_kb": 0, 00:14:41.295 "state": "online", 00:14:41.295 "raid_level": "raid1", 00:14:41.295 "superblock": true, 00:14:41.295 "num_base_bdevs": 4, 00:14:41.295 "num_base_bdevs_discovered": 3, 00:14:41.295 "num_base_bdevs_operational": 3, 00:14:41.295 "base_bdevs_list": [ 00:14:41.295 { 00:14:41.295 "name": "spare", 00:14:41.295 "uuid": "b6c8fc97-18d1-52f2-b075-130ab7f59433", 00:14:41.295 "is_configured": true, 00:14:41.295 "data_offset": 2048, 00:14:41.295 "data_size": 63488 00:14:41.295 }, 00:14:41.295 { 00:14:41.295 "name": null, 00:14:41.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.295 "is_configured": false, 00:14:41.295 "data_offset": 0, 00:14:41.295 "data_size": 63488 00:14:41.295 }, 00:14:41.295 { 00:14:41.295 "name": "BaseBdev3", 00:14:41.295 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:41.295 "is_configured": true, 00:14:41.295 "data_offset": 2048, 00:14:41.295 "data_size": 63488 00:14:41.295 }, 00:14:41.295 { 00:14:41.295 "name": "BaseBdev4", 00:14:41.295 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:41.295 "is_configured": true, 00:14:41.295 "data_offset": 2048, 00:14:41.295 "data_size": 63488 00:14:41.295 } 00:14:41.295 ] 00:14:41.295 }' 00:14:41.295 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.555 "name": "raid_bdev1", 00:14:41.555 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:41.555 "strip_size_kb": 0, 00:14:41.555 "state": "online", 00:14:41.555 "raid_level": "raid1", 00:14:41.555 "superblock": true, 00:14:41.555 "num_base_bdevs": 4, 00:14:41.555 "num_base_bdevs_discovered": 3, 00:14:41.555 "num_base_bdevs_operational": 3, 00:14:41.555 "base_bdevs_list": [ 00:14:41.555 { 00:14:41.555 "name": "spare", 00:14:41.555 "uuid": "b6c8fc97-18d1-52f2-b075-130ab7f59433", 00:14:41.555 
"is_configured": true, 00:14:41.555 "data_offset": 2048, 00:14:41.555 "data_size": 63488 00:14:41.555 }, 00:14:41.555 { 00:14:41.555 "name": null, 00:14:41.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.555 "is_configured": false, 00:14:41.555 "data_offset": 0, 00:14:41.555 "data_size": 63488 00:14:41.555 }, 00:14:41.555 { 00:14:41.555 "name": "BaseBdev3", 00:14:41.555 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:41.555 "is_configured": true, 00:14:41.555 "data_offset": 2048, 00:14:41.555 "data_size": 63488 00:14:41.555 }, 00:14:41.555 { 00:14:41.555 "name": "BaseBdev4", 00:14:41.555 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:41.555 "is_configured": true, 00:14:41.555 "data_offset": 2048, 00:14:41.555 "data_size": 63488 00:14:41.555 } 00:14:41.555 ] 00:14:41.555 }' 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.555 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.813 91.38 IOPS, 274.12 MiB/s [2024-11-15T10:59:48.741Z] 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.813 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.813 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.813 [2024-11-15 10:59:48.712404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.813 [2024-11-15 10:59:48.712445] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.071 00:14:42.071 Latency(us) 00:14:42.071 [2024-11-15T10:59:48.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.071 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:42.071 raid_bdev1 : 8.23 89.88 269.63 0.00 0.00 15818.88 321.96 125462.81 00:14:42.071 
[2024-11-15T10:59:48.999Z] =================================================================================================================== 00:14:42.071 [2024-11-15T10:59:48.999Z] Total : 89.88 269.63 0.00 0.00 15818.88 321.96 125462.81 00:14:42.071 [2024-11-15 10:59:48.829227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.071 [2024-11-15 10:59:48.829279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.071 [2024-11-15 10:59:48.829392] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.071 [2024-11-15 10:59:48.829408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:42.071 { 00:14:42.071 "results": [ 00:14:42.071 { 00:14:42.071 "job": "raid_bdev1", 00:14:42.071 "core_mask": "0x1", 00:14:42.071 "workload": "randrw", 00:14:42.071 "percentage": 50, 00:14:42.071 "status": "finished", 00:14:42.071 "queue_depth": 2, 00:14:42.071 "io_size": 3145728, 00:14:42.071 "runtime": 8.233421, 00:14:42.071 "iops": 89.87758551396801, 00:14:42.071 "mibps": 269.63275654190403, 00:14:42.071 "io_failed": 0, 00:14:42.071 "io_timeout": 0, 00:14:42.071 "avg_latency_us": 15818.880830874543, 00:14:42.071 "min_latency_us": 321.95633187772927, 00:14:42.071 "max_latency_us": 125462.80524017467 00:14:42.071 } 00:14:42.071 ], 00:14:42.071 "core_count": 1 00:14:42.071 } 00:14:42.071 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.071 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.071 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:42.072 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.072 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:42.072 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.072 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:42.072 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:42.072 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:42.072 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:42.072 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:42.072 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:42.072 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:42.072 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:42.072 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:42.072 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:42.072 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:42.072 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:42.072 10:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:42.331 /dev/nbd0 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 
-- # local i 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:42.331 1+0 records in 00:14:42.331 1+0 records out 00:14:42.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411875 s, 9.9 MB/s 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:42.331 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:42.589 /dev/nbd1 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:14:42.589 10:59:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:42.589 1+0 records in 00:14:42.589 1+0 records out 00:14:42.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355871 s, 11.5 MB/s 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:42.589 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:42.847 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd1 00:14:42.847 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:42.847 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:42.847 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:42.847 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:42.847 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:42.847 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:42.847 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:43.106 /dev/nbd1 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:43.106 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:43.107 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:43.107 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:14:43.107 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:43.107 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:43.107 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:43.107 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:14:43.107 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:43.107 10:59:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:43.107 10:59:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:43.107 1+0 records in 00:14:43.107 1+0 records out 00:14:43.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402727 s, 10.2 MB/s 00:14:43.107 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.107 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:14:43.107 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.107 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:43.107 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:14:43.107 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:43.107 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:43.107 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:43.365 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:43.365 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:43.365 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:43.365 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:43.365 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:43.365 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:14:43.365 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:43.624 10:59:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.624 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.624 [2024-11-15 10:59:50.543595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:43.624 [2024-11-15 10:59:50.543674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.624 [2024-11-15 10:59:50.543700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:43.624 [2024-11-15 
10:59:50.543713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.624 [2024-11-15 10:59:50.546119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.624 [2024-11-15 10:59:50.546160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:43.624 [2024-11-15 10:59:50.546267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:43.624 [2024-11-15 10:59:50.546418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:43.625 [2024-11-15 10:59:50.546593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:43.625 [2024-11-15 10:59:50.546713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:43.625 spare 00:14:43.625 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.625 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:43.625 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.625 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.884 [2024-11-15 10:59:50.646630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:43.884 [2024-11-15 10:59:50.646705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:43.884 [2024-11-15 10:59:50.647054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:43.884 [2024-11-15 10:59:50.647291] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:43.884 [2024-11-15 10:59:50.647326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:43.884 [2024-11-15 10:59:50.647559] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.884 "name": 
"raid_bdev1", 00:14:43.884 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:43.884 "strip_size_kb": 0, 00:14:43.884 "state": "online", 00:14:43.884 "raid_level": "raid1", 00:14:43.884 "superblock": true, 00:14:43.884 "num_base_bdevs": 4, 00:14:43.884 "num_base_bdevs_discovered": 3, 00:14:43.884 "num_base_bdevs_operational": 3, 00:14:43.884 "base_bdevs_list": [ 00:14:43.884 { 00:14:43.884 "name": "spare", 00:14:43.884 "uuid": "b6c8fc97-18d1-52f2-b075-130ab7f59433", 00:14:43.884 "is_configured": true, 00:14:43.884 "data_offset": 2048, 00:14:43.884 "data_size": 63488 00:14:43.884 }, 00:14:43.884 { 00:14:43.884 "name": null, 00:14:43.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.884 "is_configured": false, 00:14:43.884 "data_offset": 2048, 00:14:43.884 "data_size": 63488 00:14:43.884 }, 00:14:43.884 { 00:14:43.884 "name": "BaseBdev3", 00:14:43.884 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:43.884 "is_configured": true, 00:14:43.884 "data_offset": 2048, 00:14:43.884 "data_size": 63488 00:14:43.884 }, 00:14:43.884 { 00:14:43.884 "name": "BaseBdev4", 00:14:43.884 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:43.884 "is_configured": true, 00:14:43.884 "data_offset": 2048, 00:14:43.884 "data_size": 63488 00:14:43.884 } 00:14:43.884 ] 00:14:43.884 }' 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.884 10:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.452 10:59:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.452 "name": "raid_bdev1", 00:14:44.452 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:44.452 "strip_size_kb": 0, 00:14:44.452 "state": "online", 00:14:44.452 "raid_level": "raid1", 00:14:44.452 "superblock": true, 00:14:44.452 "num_base_bdevs": 4, 00:14:44.452 "num_base_bdevs_discovered": 3, 00:14:44.452 "num_base_bdevs_operational": 3, 00:14:44.452 "base_bdevs_list": [ 00:14:44.452 { 00:14:44.452 "name": "spare", 00:14:44.452 "uuid": "b6c8fc97-18d1-52f2-b075-130ab7f59433", 00:14:44.452 "is_configured": true, 00:14:44.452 "data_offset": 2048, 00:14:44.452 "data_size": 63488 00:14:44.452 }, 00:14:44.452 { 00:14:44.452 "name": null, 00:14:44.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.452 "is_configured": false, 00:14:44.452 "data_offset": 2048, 00:14:44.452 "data_size": 63488 00:14:44.452 }, 00:14:44.452 { 00:14:44.452 "name": "BaseBdev3", 00:14:44.452 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:44.452 "is_configured": true, 00:14:44.452 "data_offset": 2048, 00:14:44.452 "data_size": 63488 00:14:44.452 }, 00:14:44.452 { 00:14:44.452 "name": "BaseBdev4", 00:14:44.452 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:44.452 "is_configured": true, 00:14:44.452 "data_offset": 2048, 
00:14:44.452 "data_size": 63488 00:14:44.452 } 00:14:44.452 ] 00:14:44.452 }' 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.452 [2024-11-15 10:59:51.294506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.452 "name": "raid_bdev1", 00:14:44.452 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:44.452 "strip_size_kb": 0, 00:14:44.452 "state": "online", 00:14:44.452 "raid_level": "raid1", 00:14:44.452 "superblock": true, 00:14:44.452 "num_base_bdevs": 4, 00:14:44.452 "num_base_bdevs_discovered": 2, 00:14:44.452 "num_base_bdevs_operational": 2, 00:14:44.452 "base_bdevs_list": [ 00:14:44.452 { 00:14:44.452 "name": 
null, 00:14:44.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.452 "is_configured": false, 00:14:44.452 "data_offset": 0, 00:14:44.452 "data_size": 63488 00:14:44.452 }, 00:14:44.452 { 00:14:44.452 "name": null, 00:14:44.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.452 "is_configured": false, 00:14:44.452 "data_offset": 2048, 00:14:44.452 "data_size": 63488 00:14:44.452 }, 00:14:44.452 { 00:14:44.452 "name": "BaseBdev3", 00:14:44.452 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:44.452 "is_configured": true, 00:14:44.452 "data_offset": 2048, 00:14:44.452 "data_size": 63488 00:14:44.452 }, 00:14:44.452 { 00:14:44.452 "name": "BaseBdev4", 00:14:44.452 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:44.452 "is_configured": true, 00:14:44.452 "data_offset": 2048, 00:14:44.452 "data_size": 63488 00:14:44.452 } 00:14:44.452 ] 00:14:44.452 }' 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.452 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.024 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:45.024 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.024 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.024 [2024-11-15 10:59:51.737901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.024 [2024-11-15 10:59:51.738116] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:45.024 [2024-11-15 10:59:51.738145] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:45.024 [2024-11-15 10:59:51.738180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.024 [2024-11-15 10:59:51.754311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:45.024 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.024 10:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:45.024 [2024-11-15 10:59:51.756322] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.977 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.977 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.977 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.977 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.977 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.977 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.977 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.977 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.977 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.977 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.977 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.977 "name": "raid_bdev1", 00:14:45.977 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:45.977 "strip_size_kb": 0, 00:14:45.977 "state": "online", 
00:14:45.977 "raid_level": "raid1", 00:14:45.977 "superblock": true, 00:14:45.977 "num_base_bdevs": 4, 00:14:45.977 "num_base_bdevs_discovered": 3, 00:14:45.977 "num_base_bdevs_operational": 3, 00:14:45.977 "process": { 00:14:45.977 "type": "rebuild", 00:14:45.977 "target": "spare", 00:14:45.977 "progress": { 00:14:45.977 "blocks": 20480, 00:14:45.977 "percent": 32 00:14:45.977 } 00:14:45.977 }, 00:14:45.977 "base_bdevs_list": [ 00:14:45.977 { 00:14:45.977 "name": "spare", 00:14:45.977 "uuid": "b6c8fc97-18d1-52f2-b075-130ab7f59433", 00:14:45.977 "is_configured": true, 00:14:45.977 "data_offset": 2048, 00:14:45.977 "data_size": 63488 00:14:45.977 }, 00:14:45.977 { 00:14:45.977 "name": null, 00:14:45.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.977 "is_configured": false, 00:14:45.977 "data_offset": 2048, 00:14:45.977 "data_size": 63488 00:14:45.977 }, 00:14:45.977 { 00:14:45.977 "name": "BaseBdev3", 00:14:45.977 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:45.977 "is_configured": true, 00:14:45.977 "data_offset": 2048, 00:14:45.977 "data_size": 63488 00:14:45.977 }, 00:14:45.977 { 00:14:45.977 "name": "BaseBdev4", 00:14:45.977 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:45.977 "is_configured": true, 00:14:45.977 "data_offset": 2048, 00:14:45.977 "data_size": 63488 00:14:45.977 } 00:14:45.977 ] 00:14:45.977 }' 00:14:45.977 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.977 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.977 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.236 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.236 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:46.236 10:59:52 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.236 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.236 [2024-11-15 10:59:52.912028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.236 [2024-11-15 10:59:52.961635] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:46.236 [2024-11-15 10:59:52.961734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.236 [2024-11-15 10:59:52.961751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.236 [2024-11-15 10:59:52.961760] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:46.236 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.236 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:46.236 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.236 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.236 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.236 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.236 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.236 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.236 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.236 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.237 10:59:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.237 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.237 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.237 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.237 10:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.237 10:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.237 10:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.237 "name": "raid_bdev1", 00:14:46.237 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:46.237 "strip_size_kb": 0, 00:14:46.237 "state": "online", 00:14:46.237 "raid_level": "raid1", 00:14:46.237 "superblock": true, 00:14:46.237 "num_base_bdevs": 4, 00:14:46.237 "num_base_bdevs_discovered": 2, 00:14:46.237 "num_base_bdevs_operational": 2, 00:14:46.237 "base_bdevs_list": [ 00:14:46.237 { 00:14:46.237 "name": null, 00:14:46.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.237 "is_configured": false, 00:14:46.237 "data_offset": 0, 00:14:46.237 "data_size": 63488 00:14:46.237 }, 00:14:46.237 { 00:14:46.237 "name": null, 00:14:46.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.237 "is_configured": false, 00:14:46.237 "data_offset": 2048, 00:14:46.237 "data_size": 63488 00:14:46.237 }, 00:14:46.237 { 00:14:46.237 "name": "BaseBdev3", 00:14:46.237 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:46.237 "is_configured": true, 00:14:46.237 "data_offset": 2048, 00:14:46.237 "data_size": 63488 00:14:46.237 }, 00:14:46.237 { 00:14:46.237 "name": "BaseBdev4", 00:14:46.237 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:46.237 "is_configured": true, 00:14:46.237 "data_offset": 2048, 00:14:46.237 
"data_size": 63488 00:14:46.237 } 00:14:46.237 ] 00:14:46.237 }' 00:14:46.237 10:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.237 10:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.496 10:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:46.496 10:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.496 10:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.496 [2024-11-15 10:59:53.415025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:46.496 [2024-11-15 10:59:53.415094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.496 [2024-11-15 10:59:53.415122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:46.496 [2024-11-15 10:59:53.415136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.496 [2024-11-15 10:59:53.415635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.496 [2024-11-15 10:59:53.415657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:46.496 [2024-11-15 10:59:53.415755] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:46.496 [2024-11-15 10:59:53.415771] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:46.496 [2024-11-15 10:59:53.415781] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:46.496 [2024-11-15 10:59:53.415802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.754 [2024-11-15 10:59:53.431556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:46.754 spare 00:14:46.754 10:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.754 10:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:46.754 [2024-11-15 10:59:53.433675] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:47.691 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.691 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.691 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.691 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.691 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.691 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.691 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.691 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.691 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.691 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.691 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.691 "name": "raid_bdev1", 00:14:47.691 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:47.691 "strip_size_kb": 0, 00:14:47.691 
"state": "online", 00:14:47.691 "raid_level": "raid1", 00:14:47.691 "superblock": true, 00:14:47.691 "num_base_bdevs": 4, 00:14:47.691 "num_base_bdevs_discovered": 3, 00:14:47.691 "num_base_bdevs_operational": 3, 00:14:47.691 "process": { 00:14:47.691 "type": "rebuild", 00:14:47.691 "target": "spare", 00:14:47.691 "progress": { 00:14:47.691 "blocks": 20480, 00:14:47.691 "percent": 32 00:14:47.691 } 00:14:47.691 }, 00:14:47.691 "base_bdevs_list": [ 00:14:47.692 { 00:14:47.692 "name": "spare", 00:14:47.692 "uuid": "b6c8fc97-18d1-52f2-b075-130ab7f59433", 00:14:47.692 "is_configured": true, 00:14:47.692 "data_offset": 2048, 00:14:47.692 "data_size": 63488 00:14:47.692 }, 00:14:47.692 { 00:14:47.692 "name": null, 00:14:47.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.692 "is_configured": false, 00:14:47.692 "data_offset": 2048, 00:14:47.692 "data_size": 63488 00:14:47.692 }, 00:14:47.692 { 00:14:47.692 "name": "BaseBdev3", 00:14:47.692 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:47.692 "is_configured": true, 00:14:47.692 "data_offset": 2048, 00:14:47.692 "data_size": 63488 00:14:47.692 }, 00:14:47.692 { 00:14:47.692 "name": "BaseBdev4", 00:14:47.692 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:47.692 "is_configured": true, 00:14:47.692 "data_offset": 2048, 00:14:47.692 "data_size": 63488 00:14:47.692 } 00:14:47.692 ] 00:14:47.692 }' 00:14:47.692 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.692 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.692 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.692 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.692 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:47.692 10:59:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.692 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.692 [2024-11-15 10:59:54.597171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.963 [2024-11-15 10:59:54.639451] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:47.963 [2024-11-15 10:59:54.639530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.963 [2024-11-15 10:59:54.639549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.963 [2024-11-15 10:59:54.639556] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.963 10:59:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.963 "name": "raid_bdev1", 00:14:47.963 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:47.963 "strip_size_kb": 0, 00:14:47.963 "state": "online", 00:14:47.963 "raid_level": "raid1", 00:14:47.963 "superblock": true, 00:14:47.963 "num_base_bdevs": 4, 00:14:47.963 "num_base_bdevs_discovered": 2, 00:14:47.963 "num_base_bdevs_operational": 2, 00:14:47.963 "base_bdevs_list": [ 00:14:47.963 { 00:14:47.963 "name": null, 00:14:47.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.963 "is_configured": false, 00:14:47.963 "data_offset": 0, 00:14:47.963 "data_size": 63488 00:14:47.963 }, 00:14:47.963 { 00:14:47.963 "name": null, 00:14:47.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.963 "is_configured": false, 00:14:47.963 "data_offset": 2048, 00:14:47.963 "data_size": 63488 00:14:47.963 }, 00:14:47.963 { 00:14:47.963 "name": "BaseBdev3", 00:14:47.963 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:47.963 "is_configured": true, 00:14:47.963 "data_offset": 2048, 00:14:47.963 "data_size": 63488 00:14:47.963 }, 00:14:47.963 { 00:14:47.963 "name": "BaseBdev4", 00:14:47.963 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:47.963 "is_configured": true, 00:14:47.963 "data_offset": 2048, 00:14:47.963 
"data_size": 63488 00:14:47.963 } 00:14:47.963 ] 00:14:47.963 }' 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.963 10:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.222 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:48.222 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.222 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:48.222 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:48.222 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.222 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.222 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.222 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.222 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.482 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.482 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.482 "name": "raid_bdev1", 00:14:48.482 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:48.482 "strip_size_kb": 0, 00:14:48.482 "state": "online", 00:14:48.482 "raid_level": "raid1", 00:14:48.482 "superblock": true, 00:14:48.482 "num_base_bdevs": 4, 00:14:48.482 "num_base_bdevs_discovered": 2, 00:14:48.482 "num_base_bdevs_operational": 2, 00:14:48.482 "base_bdevs_list": [ 00:14:48.482 { 00:14:48.482 "name": null, 00:14:48.482 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:48.482 "is_configured": false, 00:14:48.482 "data_offset": 0, 00:14:48.482 "data_size": 63488 00:14:48.482 }, 00:14:48.482 { 00:14:48.482 "name": null, 00:14:48.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.482 "is_configured": false, 00:14:48.482 "data_offset": 2048, 00:14:48.482 "data_size": 63488 00:14:48.482 }, 00:14:48.482 { 00:14:48.482 "name": "BaseBdev3", 00:14:48.482 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:48.482 "is_configured": true, 00:14:48.482 "data_offset": 2048, 00:14:48.482 "data_size": 63488 00:14:48.482 }, 00:14:48.482 { 00:14:48.482 "name": "BaseBdev4", 00:14:48.482 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:48.482 "is_configured": true, 00:14:48.482 "data_offset": 2048, 00:14:48.482 "data_size": 63488 00:14:48.482 } 00:14:48.482 ] 00:14:48.482 }' 00:14:48.482 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.482 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:48.482 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.482 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:48.482 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:48.482 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.482 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.482 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.482 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:48.482 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.482 10:59:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.482 [2024-11-15 10:59:55.291654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:48.482 [2024-11-15 10:59:55.291708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.482 [2024-11-15 10:59:55.291733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:48.482 [2024-11-15 10:59:55.291743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.482 [2024-11-15 10:59:55.292161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.482 [2024-11-15 10:59:55.292178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:48.482 [2024-11-15 10:59:55.292259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:48.482 [2024-11-15 10:59:55.292273] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:48.482 [2024-11-15 10:59:55.292285] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:48.482 [2024-11-15 10:59:55.292294] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:48.482 BaseBdev1 00:14:48.482 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.482 10:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:49.419 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:49.419 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.419 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:49.419 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.419 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.419 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.419 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.419 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.419 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.419 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.419 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.419 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.419 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.420 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.420 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.680 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.680 "name": "raid_bdev1", 00:14:49.680 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:49.680 "strip_size_kb": 0, 00:14:49.680 "state": "online", 00:14:49.680 "raid_level": "raid1", 00:14:49.680 "superblock": true, 00:14:49.680 "num_base_bdevs": 4, 00:14:49.680 "num_base_bdevs_discovered": 2, 00:14:49.680 "num_base_bdevs_operational": 2, 00:14:49.680 "base_bdevs_list": [ 00:14:49.680 { 00:14:49.680 "name": null, 00:14:49.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.680 "is_configured": false, 00:14:49.680 
"data_offset": 0, 00:14:49.680 "data_size": 63488 00:14:49.680 }, 00:14:49.680 { 00:14:49.680 "name": null, 00:14:49.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.680 "is_configured": false, 00:14:49.680 "data_offset": 2048, 00:14:49.680 "data_size": 63488 00:14:49.680 }, 00:14:49.680 { 00:14:49.680 "name": "BaseBdev3", 00:14:49.680 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:49.680 "is_configured": true, 00:14:49.680 "data_offset": 2048, 00:14:49.680 "data_size": 63488 00:14:49.680 }, 00:14:49.680 { 00:14:49.680 "name": "BaseBdev4", 00:14:49.680 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:49.680 "is_configured": true, 00:14:49.680 "data_offset": 2048, 00:14:49.680 "data_size": 63488 00:14:49.680 } 00:14:49.680 ] 00:14:49.680 }' 00:14:49.680 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.680 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.939 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.939 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.939 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.939 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.939 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.939 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.939 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.939 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.939 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:49.939 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.939 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.939 "name": "raid_bdev1", 00:14:49.939 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:49.939 "strip_size_kb": 0, 00:14:49.939 "state": "online", 00:14:49.939 "raid_level": "raid1", 00:14:49.939 "superblock": true, 00:14:49.939 "num_base_bdevs": 4, 00:14:49.939 "num_base_bdevs_discovered": 2, 00:14:49.939 "num_base_bdevs_operational": 2, 00:14:49.939 "base_bdevs_list": [ 00:14:49.939 { 00:14:49.939 "name": null, 00:14:49.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.939 "is_configured": false, 00:14:49.939 "data_offset": 0, 00:14:49.939 "data_size": 63488 00:14:49.939 }, 00:14:49.939 { 00:14:49.939 "name": null, 00:14:49.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.939 "is_configured": false, 00:14:49.939 "data_offset": 2048, 00:14:49.939 "data_size": 63488 00:14:49.939 }, 00:14:49.939 { 00:14:49.939 "name": "BaseBdev3", 00:14:49.939 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:49.939 "is_configured": true, 00:14:49.940 "data_offset": 2048, 00:14:49.940 "data_size": 63488 00:14:49.940 }, 00:14:49.940 { 00:14:49.940 "name": "BaseBdev4", 00:14:49.940 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:49.940 "is_configured": true, 00:14:49.940 "data_offset": 2048, 00:14:49.940 "data_size": 63488 00:14:49.940 } 00:14:49.940 ] 00:14:49.940 }' 00:14:49.940 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.940 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.940 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.940 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.940 
10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:49.940 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:14:49.940 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:49.940 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:49.940 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.940 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:49.940 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.940 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:49.940 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.940 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.199 [2024-11-15 10:59:56.869242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.199 [2024-11-15 10:59:56.869425] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:50.199 [2024-11-15 10:59:56.869443] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:50.199 request: 00:14:50.199 { 00:14:50.199 "base_bdev": "BaseBdev1", 00:14:50.199 "raid_bdev": "raid_bdev1", 00:14:50.199 "method": "bdev_raid_add_base_bdev", 00:14:50.199 "req_id": 1 00:14:50.199 } 00:14:50.199 Got JSON-RPC error response 00:14:50.199 response: 00:14:50.199 { 00:14:50.199 "code": -22, 00:14:50.199 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:50.199 } 00:14:50.199 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:50.199 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:14:50.199 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:50.199 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:50.199 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:50.199 10:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:51.213 10:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:51.213 10:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.213 10:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.213 10:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.213 10:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.213 10:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.214 10:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.214 10:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.214 10:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.214 10:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.214 10:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.214 10:59:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.214 10:59:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.214 10:59:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.214 10:59:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.214 10:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.214 "name": "raid_bdev1", 00:14:51.214 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:51.214 "strip_size_kb": 0, 00:14:51.214 "state": "online", 00:14:51.214 "raid_level": "raid1", 00:14:51.214 "superblock": true, 00:14:51.214 "num_base_bdevs": 4, 00:14:51.214 "num_base_bdevs_discovered": 2, 00:14:51.214 "num_base_bdevs_operational": 2, 00:14:51.214 "base_bdevs_list": [ 00:14:51.214 { 00:14:51.214 "name": null, 00:14:51.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.214 "is_configured": false, 00:14:51.214 "data_offset": 0, 00:14:51.214 "data_size": 63488 00:14:51.214 }, 00:14:51.214 { 00:14:51.214 "name": null, 00:14:51.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.214 "is_configured": false, 00:14:51.214 "data_offset": 2048, 00:14:51.214 "data_size": 63488 00:14:51.214 }, 00:14:51.214 { 00:14:51.214 "name": "BaseBdev3", 00:14:51.214 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:51.214 "is_configured": true, 00:14:51.214 "data_offset": 2048, 00:14:51.214 "data_size": 63488 00:14:51.214 }, 00:14:51.214 { 00:14:51.214 "name": "BaseBdev4", 00:14:51.214 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:51.214 "is_configured": true, 00:14:51.214 "data_offset": 2048, 00:14:51.214 "data_size": 63488 00:14:51.214 } 00:14:51.214 ] 00:14:51.214 }' 00:14:51.214 10:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.214 10:59:57 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.472 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:51.472 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.472 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:51.472 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:51.472 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.472 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.472 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.472 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.472 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.472 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.472 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.472 "name": "raid_bdev1", 00:14:51.472 "uuid": "95957d52-19ee-4ba7-a6ca-6fb2d3c23c71", 00:14:51.472 "strip_size_kb": 0, 00:14:51.472 "state": "online", 00:14:51.472 "raid_level": "raid1", 00:14:51.472 "superblock": true, 00:14:51.472 "num_base_bdevs": 4, 00:14:51.472 "num_base_bdevs_discovered": 2, 00:14:51.472 "num_base_bdevs_operational": 2, 00:14:51.472 "base_bdevs_list": [ 00:14:51.472 { 00:14:51.472 "name": null, 00:14:51.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.472 "is_configured": false, 00:14:51.472 "data_offset": 0, 00:14:51.472 "data_size": 63488 00:14:51.472 }, 00:14:51.472 { 00:14:51.472 "name": null, 00:14:51.472 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:51.472 "is_configured": false, 00:14:51.472 "data_offset": 2048, 00:14:51.472 "data_size": 63488 00:14:51.472 }, 00:14:51.472 { 00:14:51.472 "name": "BaseBdev3", 00:14:51.472 "uuid": "ffe43cb9-2cd0-53bd-a448-b789845c8b40", 00:14:51.472 "is_configured": true, 00:14:51.472 "data_offset": 2048, 00:14:51.472 "data_size": 63488 00:14:51.473 }, 00:14:51.473 { 00:14:51.473 "name": "BaseBdev4", 00:14:51.473 "uuid": "2d3c1925-67d7-5f70-af21-81460c2f0084", 00:14:51.473 "is_configured": true, 00:14:51.473 "data_offset": 2048, 00:14:51.473 "data_size": 63488 00:14:51.473 } 00:14:51.473 ] 00:14:51.473 }' 00:14:51.473 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.731 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:51.731 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.731 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:51.731 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79332 00:14:51.731 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 79332 ']' 00:14:51.731 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 79332 00:14:51.731 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:14:51.731 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:51.731 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79332 00:14:51.731 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:51.731 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo 
']' 00:14:51.731 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79332' 00:14:51.731 killing process with pid 79332 00:14:51.731 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 79332 00:14:51.731 Received shutdown signal, test time was about 17.954535 seconds 00:14:51.731 00:14:51.731 Latency(us) 00:14:51.731 [2024-11-15T10:59:58.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.731 [2024-11-15T10:59:58.659Z] =================================================================================================================== 00:14:51.731 [2024-11-15T10:59:58.659Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:51.731 [2024-11-15 10:59:58.510580] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.731 [2024-11-15 10:59:58.510718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.731 10:59:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 79332 00:14:51.731 [2024-11-15 10:59:58.510793] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.731 [2024-11-15 10:59:58.510809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:52.299 [2024-11-15 10:59:58.932640] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.236 11:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:53.236 00:14:53.236 real 0m21.391s 00:14:53.236 user 0m28.023s 00:14:53.236 sys 0m2.543s 00:14:53.236 11:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:53.236 11:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.236 ************************************ 00:14:53.236 END TEST raid_rebuild_test_sb_io 00:14:53.236 
************************************ 00:14:53.236 11:00:00 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:53.236 11:00:00 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:53.236 11:00:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:53.236 11:00:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:53.236 11:00:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.237 ************************************ 00:14:53.237 START TEST raid5f_state_function_test 00:14:53.237 ************************************ 00:14:53.237 11:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:14:53.237 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:53.237 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:53.237 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:53.237 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:53.237 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:53.237 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.237 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:53.237 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.237 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.237 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:53.237 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.237 11:00:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.237 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:53.237 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.237 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80056 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:53.496 Process raid pid: 80056 00:14:53.496 
11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80056' 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80056 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 80056 ']' 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:53.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:53.496 11:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.496 [2024-11-15 11:00:00.249884] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:14:53.496 [2024-11-15 11:00:00.250000] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.496 [2024-11-15 11:00:00.407938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.754 [2024-11-15 11:00:00.524775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.013 [2024-11-15 11:00:00.734540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.013 [2024-11-15 11:00:00.734601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.273 [2024-11-15 11:00:01.086984] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.273 [2024-11-15 11:00:01.087036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.273 [2024-11-15 11:00:01.087050] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.273 [2024-11-15 11:00:01.087060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.273 [2024-11-15 11:00:01.087066] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:54.273 [2024-11-15 11:00:01.087074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:54.273 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.273 "name": "Existed_Raid", 00:14:54.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.273 "strip_size_kb": 64, 00:14:54.273 "state": "configuring", 00:14:54.273 "raid_level": "raid5f", 00:14:54.273 "superblock": false, 00:14:54.273 "num_base_bdevs": 3, 00:14:54.273 "num_base_bdevs_discovered": 0, 00:14:54.273 "num_base_bdevs_operational": 3, 00:14:54.273 "base_bdevs_list": [ 00:14:54.273 { 00:14:54.273 "name": "BaseBdev1", 00:14:54.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.273 "is_configured": false, 00:14:54.273 "data_offset": 0, 00:14:54.273 "data_size": 0 00:14:54.273 }, 00:14:54.273 { 00:14:54.273 "name": "BaseBdev2", 00:14:54.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.273 "is_configured": false, 00:14:54.273 "data_offset": 0, 00:14:54.273 "data_size": 0 00:14:54.273 }, 00:14:54.273 { 00:14:54.273 "name": "BaseBdev3", 00:14:54.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.273 "is_configured": false, 00:14:54.273 "data_offset": 0, 00:14:54.273 "data_size": 0 00:14:54.273 } 00:14:54.273 ] 00:14:54.273 }' 00:14:54.274 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.274 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.842 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:54.842 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.842 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.842 [2024-11-15 11:00:01.534227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:54.842 [2024-11-15 11:00:01.534287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:54.842 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.842 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:54.842 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.842 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.842 [2024-11-15 11:00:01.546168] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.842 [2024-11-15 11:00:01.546216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.842 [2024-11-15 11:00:01.546225] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.842 [2024-11-15 11:00:01.546235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.842 [2024-11-15 11:00:01.546241] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:54.842 [2024-11-15 11:00:01.546250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:54.842 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.842 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:54.842 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.842 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.842 [2024-11-15 11:00:01.599261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.842 BaseBdev1 00:14:54.842 11:00:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.842 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:54.842 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:54.842 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:54.842 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:54.842 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.843 [ 00:14:54.843 { 00:14:54.843 "name": "BaseBdev1", 00:14:54.843 "aliases": [ 00:14:54.843 "faca41dc-3e04-4a2b-a074-59a40cf6ab4f" 00:14:54.843 ], 00:14:54.843 "product_name": "Malloc disk", 00:14:54.843 "block_size": 512, 00:14:54.843 "num_blocks": 65536, 00:14:54.843 "uuid": "faca41dc-3e04-4a2b-a074-59a40cf6ab4f", 00:14:54.843 "assigned_rate_limits": { 00:14:54.843 "rw_ios_per_sec": 0, 00:14:54.843 
"rw_mbytes_per_sec": 0, 00:14:54.843 "r_mbytes_per_sec": 0, 00:14:54.843 "w_mbytes_per_sec": 0 00:14:54.843 }, 00:14:54.843 "claimed": true, 00:14:54.843 "claim_type": "exclusive_write", 00:14:54.843 "zoned": false, 00:14:54.843 "supported_io_types": { 00:14:54.843 "read": true, 00:14:54.843 "write": true, 00:14:54.843 "unmap": true, 00:14:54.843 "flush": true, 00:14:54.843 "reset": true, 00:14:54.843 "nvme_admin": false, 00:14:54.843 "nvme_io": false, 00:14:54.843 "nvme_io_md": false, 00:14:54.843 "write_zeroes": true, 00:14:54.843 "zcopy": true, 00:14:54.843 "get_zone_info": false, 00:14:54.843 "zone_management": false, 00:14:54.843 "zone_append": false, 00:14:54.843 "compare": false, 00:14:54.843 "compare_and_write": false, 00:14:54.843 "abort": true, 00:14:54.843 "seek_hole": false, 00:14:54.843 "seek_data": false, 00:14:54.843 "copy": true, 00:14:54.843 "nvme_iov_md": false 00:14:54.843 }, 00:14:54.843 "memory_domains": [ 00:14:54.843 { 00:14:54.843 "dma_device_id": "system", 00:14:54.843 "dma_device_type": 1 00:14:54.843 }, 00:14:54.843 { 00:14:54.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.843 "dma_device_type": 2 00:14:54.843 } 00:14:54.843 ], 00:14:54.843 "driver_specific": {} 00:14:54.843 } 00:14:54.843 ] 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.843 11:00:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.843 "name": "Existed_Raid", 00:14:54.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.843 "strip_size_kb": 64, 00:14:54.843 "state": "configuring", 00:14:54.843 "raid_level": "raid5f", 00:14:54.843 "superblock": false, 00:14:54.843 "num_base_bdevs": 3, 00:14:54.843 "num_base_bdevs_discovered": 1, 00:14:54.843 "num_base_bdevs_operational": 3, 00:14:54.843 "base_bdevs_list": [ 00:14:54.843 { 00:14:54.843 "name": "BaseBdev1", 00:14:54.843 "uuid": "faca41dc-3e04-4a2b-a074-59a40cf6ab4f", 00:14:54.843 "is_configured": true, 00:14:54.843 "data_offset": 0, 00:14:54.843 "data_size": 65536 00:14:54.843 }, 00:14:54.843 { 00:14:54.843 "name": 
"BaseBdev2", 00:14:54.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.843 "is_configured": false, 00:14:54.843 "data_offset": 0, 00:14:54.843 "data_size": 0 00:14:54.843 }, 00:14:54.843 { 00:14:54.843 "name": "BaseBdev3", 00:14:54.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.843 "is_configured": false, 00:14:54.843 "data_offset": 0, 00:14:54.843 "data_size": 0 00:14:54.843 } 00:14:54.843 ] 00:14:54.843 }' 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.843 11:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.412 [2024-11-15 11:00:02.070474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.412 [2024-11-15 11:00:02.070539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.412 [2024-11-15 11:00:02.078515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.412 [2024-11-15 11:00:02.080612] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:55.412 [2024-11-15 11:00:02.080656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.412 [2024-11-15 11:00:02.080667] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:55.412 [2024-11-15 11:00:02.080676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.412 "name": "Existed_Raid", 00:14:55.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.412 "strip_size_kb": 64, 00:14:55.412 "state": "configuring", 00:14:55.412 "raid_level": "raid5f", 00:14:55.412 "superblock": false, 00:14:55.412 "num_base_bdevs": 3, 00:14:55.412 "num_base_bdevs_discovered": 1, 00:14:55.412 "num_base_bdevs_operational": 3, 00:14:55.412 "base_bdevs_list": [ 00:14:55.412 { 00:14:55.412 "name": "BaseBdev1", 00:14:55.412 "uuid": "faca41dc-3e04-4a2b-a074-59a40cf6ab4f", 00:14:55.412 "is_configured": true, 00:14:55.412 "data_offset": 0, 00:14:55.412 "data_size": 65536 00:14:55.412 }, 00:14:55.412 { 00:14:55.412 "name": "BaseBdev2", 00:14:55.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.412 "is_configured": false, 00:14:55.412 "data_offset": 0, 00:14:55.412 "data_size": 0 00:14:55.412 }, 00:14:55.412 { 00:14:55.412 "name": "BaseBdev3", 00:14:55.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.412 "is_configured": false, 00:14:55.412 "data_offset": 0, 00:14:55.412 "data_size": 0 00:14:55.412 } 00:14:55.412 ] 00:14:55.412 }' 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.412 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.672 11:00:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:55.672 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.672 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.672 [2024-11-15 11:00:02.594047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.672 BaseBdev2 00:14:55.672 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.672 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:55.672 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:55.672 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:55.672 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:55.672 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:55.672 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:55.672 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:55.672 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.672 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.932 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.932 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:55.932 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.932 11:00:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:55.932 [ 00:14:55.932 { 00:14:55.932 "name": "BaseBdev2", 00:14:55.932 "aliases": [ 00:14:55.932 "521d9a83-aa44-488a-a2ce-97b3c5ad9755" 00:14:55.932 ], 00:14:55.932 "product_name": "Malloc disk", 00:14:55.932 "block_size": 512, 00:14:55.932 "num_blocks": 65536, 00:14:55.932 "uuid": "521d9a83-aa44-488a-a2ce-97b3c5ad9755", 00:14:55.932 "assigned_rate_limits": { 00:14:55.932 "rw_ios_per_sec": 0, 00:14:55.932 "rw_mbytes_per_sec": 0, 00:14:55.932 "r_mbytes_per_sec": 0, 00:14:55.932 "w_mbytes_per_sec": 0 00:14:55.932 }, 00:14:55.932 "claimed": true, 00:14:55.932 "claim_type": "exclusive_write", 00:14:55.932 "zoned": false, 00:14:55.932 "supported_io_types": { 00:14:55.932 "read": true, 00:14:55.932 "write": true, 00:14:55.932 "unmap": true, 00:14:55.932 "flush": true, 00:14:55.932 "reset": true, 00:14:55.932 "nvme_admin": false, 00:14:55.932 "nvme_io": false, 00:14:55.932 "nvme_io_md": false, 00:14:55.932 "write_zeroes": true, 00:14:55.932 "zcopy": true, 00:14:55.932 "get_zone_info": false, 00:14:55.932 "zone_management": false, 00:14:55.932 "zone_append": false, 00:14:55.932 "compare": false, 00:14:55.932 "compare_and_write": false, 00:14:55.933 "abort": true, 00:14:55.933 "seek_hole": false, 00:14:55.933 "seek_data": false, 00:14:55.933 "copy": true, 00:14:55.933 "nvme_iov_md": false 00:14:55.933 }, 00:14:55.933 "memory_domains": [ 00:14:55.933 { 00:14:55.933 "dma_device_id": "system", 00:14:55.933 "dma_device_type": 1 00:14:55.933 }, 00:14:55.933 { 00:14:55.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.933 "dma_device_type": 2 00:14:55.933 } 00:14:55.933 ], 00:14:55.933 "driver_specific": {} 00:14:55.933 } 00:14:55.933 ] 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:55.933 "name": "Existed_Raid", 00:14:55.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.933 "strip_size_kb": 64, 00:14:55.933 "state": "configuring", 00:14:55.933 "raid_level": "raid5f", 00:14:55.933 "superblock": false, 00:14:55.933 "num_base_bdevs": 3, 00:14:55.933 "num_base_bdevs_discovered": 2, 00:14:55.933 "num_base_bdevs_operational": 3, 00:14:55.933 "base_bdevs_list": [ 00:14:55.933 { 00:14:55.933 "name": "BaseBdev1", 00:14:55.933 "uuid": "faca41dc-3e04-4a2b-a074-59a40cf6ab4f", 00:14:55.933 "is_configured": true, 00:14:55.933 "data_offset": 0, 00:14:55.933 "data_size": 65536 00:14:55.933 }, 00:14:55.933 { 00:14:55.933 "name": "BaseBdev2", 00:14:55.933 "uuid": "521d9a83-aa44-488a-a2ce-97b3c5ad9755", 00:14:55.933 "is_configured": true, 00:14:55.933 "data_offset": 0, 00:14:55.933 "data_size": 65536 00:14:55.933 }, 00:14:55.933 { 00:14:55.933 "name": "BaseBdev3", 00:14:55.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.933 "is_configured": false, 00:14:55.933 "data_offset": 0, 00:14:55.933 "data_size": 0 00:14:55.933 } 00:14:55.933 ] 00:14:55.933 }' 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.933 11:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.506 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:56.506 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.507 [2024-11-15 11:00:03.181746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:56.507 [2024-11-15 11:00:03.181912] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:56.507 [2024-11-15 11:00:03.181952] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:56.507 [2024-11-15 11:00:03.182295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:56.507 [2024-11-15 11:00:03.187894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:56.507 [2024-11-15 11:00:03.187954] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:56.507 [2024-11-15 11:00:03.188296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.507 BaseBdev3 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.507 [ 00:14:56.507 { 00:14:56.507 "name": "BaseBdev3", 00:14:56.507 "aliases": [ 00:14:56.507 "9607363f-df65-478c-af9e-d4d40cab0055" 00:14:56.507 ], 00:14:56.507 "product_name": "Malloc disk", 00:14:56.507 "block_size": 512, 00:14:56.507 "num_blocks": 65536, 00:14:56.507 "uuid": "9607363f-df65-478c-af9e-d4d40cab0055", 00:14:56.507 "assigned_rate_limits": { 00:14:56.507 "rw_ios_per_sec": 0, 00:14:56.507 "rw_mbytes_per_sec": 0, 00:14:56.507 "r_mbytes_per_sec": 0, 00:14:56.507 "w_mbytes_per_sec": 0 00:14:56.507 }, 00:14:56.507 "claimed": true, 00:14:56.507 "claim_type": "exclusive_write", 00:14:56.507 "zoned": false, 00:14:56.507 "supported_io_types": { 00:14:56.507 "read": true, 00:14:56.507 "write": true, 00:14:56.507 "unmap": true, 00:14:56.507 "flush": true, 00:14:56.507 "reset": true, 00:14:56.507 "nvme_admin": false, 00:14:56.507 "nvme_io": false, 00:14:56.507 "nvme_io_md": false, 00:14:56.507 "write_zeroes": true, 00:14:56.507 "zcopy": true, 00:14:56.507 "get_zone_info": false, 00:14:56.507 "zone_management": false, 00:14:56.507 "zone_append": false, 00:14:56.507 "compare": false, 00:14:56.507 "compare_and_write": false, 00:14:56.507 "abort": true, 00:14:56.507 "seek_hole": false, 00:14:56.507 "seek_data": false, 00:14:56.507 "copy": true, 00:14:56.507 "nvme_iov_md": false 00:14:56.507 }, 00:14:56.507 "memory_domains": [ 00:14:56.507 { 00:14:56.507 "dma_device_id": "system", 00:14:56.507 "dma_device_type": 1 00:14:56.507 }, 00:14:56.507 { 00:14:56.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.507 "dma_device_type": 2 00:14:56.507 } 00:14:56.507 ], 00:14:56.507 "driver_specific": {} 00:14:56.507 } 00:14:56.507 ] 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.507 11:00:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.507 "name": "Existed_Raid", 00:14:56.507 "uuid": "95ece07f-4a9a-4e53-82fe-cb76400b40e7", 00:14:56.507 "strip_size_kb": 64, 00:14:56.507 "state": "online", 00:14:56.507 "raid_level": "raid5f", 00:14:56.507 "superblock": false, 00:14:56.507 "num_base_bdevs": 3, 00:14:56.507 "num_base_bdevs_discovered": 3, 00:14:56.507 "num_base_bdevs_operational": 3, 00:14:56.507 "base_bdevs_list": [ 00:14:56.507 { 00:14:56.507 "name": "BaseBdev1", 00:14:56.507 "uuid": "faca41dc-3e04-4a2b-a074-59a40cf6ab4f", 00:14:56.507 "is_configured": true, 00:14:56.507 "data_offset": 0, 00:14:56.507 "data_size": 65536 00:14:56.507 }, 00:14:56.507 { 00:14:56.507 "name": "BaseBdev2", 00:14:56.507 "uuid": "521d9a83-aa44-488a-a2ce-97b3c5ad9755", 00:14:56.507 "is_configured": true, 00:14:56.507 "data_offset": 0, 00:14:56.507 "data_size": 65536 00:14:56.507 }, 00:14:56.507 { 00:14:56.507 "name": "BaseBdev3", 00:14:56.507 "uuid": "9607363f-df65-478c-af9e-d4d40cab0055", 00:14:56.507 "is_configured": true, 00:14:56.507 "data_offset": 0, 00:14:56.507 "data_size": 65536 00:14:56.507 } 00:14:56.507 ] 00:14:56.507 }' 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.507 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.781 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:56.781 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:56.781 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:56.781 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:56.781 11:00:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:56.781 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:56.781 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:56.781 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:56.781 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.781 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.042 [2024-11-15 11:00:03.706597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.042 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.042 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:57.042 "name": "Existed_Raid", 00:14:57.042 "aliases": [ 00:14:57.043 "95ece07f-4a9a-4e53-82fe-cb76400b40e7" 00:14:57.043 ], 00:14:57.043 "product_name": "Raid Volume", 00:14:57.043 "block_size": 512, 00:14:57.043 "num_blocks": 131072, 00:14:57.043 "uuid": "95ece07f-4a9a-4e53-82fe-cb76400b40e7", 00:14:57.043 "assigned_rate_limits": { 00:14:57.043 "rw_ios_per_sec": 0, 00:14:57.043 "rw_mbytes_per_sec": 0, 00:14:57.043 "r_mbytes_per_sec": 0, 00:14:57.043 "w_mbytes_per_sec": 0 00:14:57.043 }, 00:14:57.043 "claimed": false, 00:14:57.043 "zoned": false, 00:14:57.043 "supported_io_types": { 00:14:57.043 "read": true, 00:14:57.043 "write": true, 00:14:57.043 "unmap": false, 00:14:57.043 "flush": false, 00:14:57.043 "reset": true, 00:14:57.043 "nvme_admin": false, 00:14:57.043 "nvme_io": false, 00:14:57.043 "nvme_io_md": false, 00:14:57.043 "write_zeroes": true, 00:14:57.043 "zcopy": false, 00:14:57.043 "get_zone_info": false, 00:14:57.043 "zone_management": false, 00:14:57.043 "zone_append": false, 
00:14:57.043 "compare": false, 00:14:57.043 "compare_and_write": false, 00:14:57.043 "abort": false, 00:14:57.043 "seek_hole": false, 00:14:57.043 "seek_data": false, 00:14:57.043 "copy": false, 00:14:57.043 "nvme_iov_md": false 00:14:57.043 }, 00:14:57.043 "driver_specific": { 00:14:57.043 "raid": { 00:14:57.043 "uuid": "95ece07f-4a9a-4e53-82fe-cb76400b40e7", 00:14:57.043 "strip_size_kb": 64, 00:14:57.043 "state": "online", 00:14:57.043 "raid_level": "raid5f", 00:14:57.043 "superblock": false, 00:14:57.043 "num_base_bdevs": 3, 00:14:57.043 "num_base_bdevs_discovered": 3, 00:14:57.043 "num_base_bdevs_operational": 3, 00:14:57.043 "base_bdevs_list": [ 00:14:57.043 { 00:14:57.043 "name": "BaseBdev1", 00:14:57.043 "uuid": "faca41dc-3e04-4a2b-a074-59a40cf6ab4f", 00:14:57.043 "is_configured": true, 00:14:57.043 "data_offset": 0, 00:14:57.043 "data_size": 65536 00:14:57.043 }, 00:14:57.043 { 00:14:57.043 "name": "BaseBdev2", 00:14:57.043 "uuid": "521d9a83-aa44-488a-a2ce-97b3c5ad9755", 00:14:57.043 "is_configured": true, 00:14:57.043 "data_offset": 0, 00:14:57.043 "data_size": 65536 00:14:57.043 }, 00:14:57.043 { 00:14:57.043 "name": "BaseBdev3", 00:14:57.043 "uuid": "9607363f-df65-478c-af9e-d4d40cab0055", 00:14:57.043 "is_configured": true, 00:14:57.043 "data_offset": 0, 00:14:57.043 "data_size": 65536 00:14:57.043 } 00:14:57.043 ] 00:14:57.043 } 00:14:57.043 } 00:14:57.043 }' 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:57.043 BaseBdev2 00:14:57.043 BaseBdev3' 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.043 11:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.302 [2024-11-15 11:00:03.973978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:57.302 
11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.302 "name": "Existed_Raid", 00:14:57.302 "uuid": "95ece07f-4a9a-4e53-82fe-cb76400b40e7", 00:14:57.302 "strip_size_kb": 64, 00:14:57.302 "state": 
"online", 00:14:57.302 "raid_level": "raid5f", 00:14:57.302 "superblock": false, 00:14:57.302 "num_base_bdevs": 3, 00:14:57.302 "num_base_bdevs_discovered": 2, 00:14:57.302 "num_base_bdevs_operational": 2, 00:14:57.302 "base_bdevs_list": [ 00:14:57.302 { 00:14:57.302 "name": null, 00:14:57.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.302 "is_configured": false, 00:14:57.302 "data_offset": 0, 00:14:57.302 "data_size": 65536 00:14:57.302 }, 00:14:57.302 { 00:14:57.302 "name": "BaseBdev2", 00:14:57.302 "uuid": "521d9a83-aa44-488a-a2ce-97b3c5ad9755", 00:14:57.302 "is_configured": true, 00:14:57.302 "data_offset": 0, 00:14:57.302 "data_size": 65536 00:14:57.302 }, 00:14:57.302 { 00:14:57.302 "name": "BaseBdev3", 00:14:57.302 "uuid": "9607363f-df65-478c-af9e-d4d40cab0055", 00:14:57.302 "is_configured": true, 00:14:57.302 "data_offset": 0, 00:14:57.302 "data_size": 65536 00:14:57.302 } 00:14:57.302 ] 00:14:57.302 }' 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.302 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.872 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:57.872 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:57.872 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:57.872 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.872 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.872 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.872 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.872 11:00:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:57.872 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:57.872 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:57.872 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.872 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.872 [2024-11-15 11:00:04.584329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:57.872 [2024-11-15 11:00:04.584558] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.872 [2024-11-15 11:00:04.688522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.872 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.872 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:57.872 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:57.873 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.873 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:57.873 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.873 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.873 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.873 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:57.873 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:57.873 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:57.873 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.873 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.873 [2024-11-15 11:00:04.748449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:57.873 [2024-11-15 11:00:04.748562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:58.132 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.132 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:58.132 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:58.132 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.132 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:58.132 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.132 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.132 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.132 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:58.132 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:58.132 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:58.132 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:58.132 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.133 BaseBdev2 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:58.133 [ 00:14:58.133 { 00:14:58.133 "name": "BaseBdev2", 00:14:58.133 "aliases": [ 00:14:58.133 "c8f09aed-9db8-4844-af84-f4cdf59604e8" 00:14:58.133 ], 00:14:58.133 "product_name": "Malloc disk", 00:14:58.133 "block_size": 512, 00:14:58.133 "num_blocks": 65536, 00:14:58.133 "uuid": "c8f09aed-9db8-4844-af84-f4cdf59604e8", 00:14:58.133 "assigned_rate_limits": { 00:14:58.133 "rw_ios_per_sec": 0, 00:14:58.133 "rw_mbytes_per_sec": 0, 00:14:58.133 "r_mbytes_per_sec": 0, 00:14:58.133 "w_mbytes_per_sec": 0 00:14:58.133 }, 00:14:58.133 "claimed": false, 00:14:58.133 "zoned": false, 00:14:58.133 "supported_io_types": { 00:14:58.133 "read": true, 00:14:58.133 "write": true, 00:14:58.133 "unmap": true, 00:14:58.133 "flush": true, 00:14:58.133 "reset": true, 00:14:58.133 "nvme_admin": false, 00:14:58.133 "nvme_io": false, 00:14:58.133 "nvme_io_md": false, 00:14:58.133 "write_zeroes": true, 00:14:58.133 "zcopy": true, 00:14:58.133 "get_zone_info": false, 00:14:58.133 "zone_management": false, 00:14:58.133 "zone_append": false, 00:14:58.133 "compare": false, 00:14:58.133 "compare_and_write": false, 00:14:58.133 "abort": true, 00:14:58.133 "seek_hole": false, 00:14:58.133 "seek_data": false, 00:14:58.133 "copy": true, 00:14:58.133 "nvme_iov_md": false 00:14:58.133 }, 00:14:58.133 "memory_domains": [ 00:14:58.133 { 00:14:58.133 "dma_device_id": "system", 00:14:58.133 "dma_device_type": 1 00:14:58.133 }, 00:14:58.133 { 00:14:58.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.133 "dma_device_type": 2 00:14:58.133 } 00:14:58.133 ], 00:14:58.133 "driver_specific": {} 00:14:58.133 } 00:14:58.133 ] 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.133 11:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.133 BaseBdev3 00:14:58.133 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.133 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:58.133 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:58.133 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:58.133 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:58.133 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:58.133 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:58.133 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:58.133 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.133 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.133 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.133 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:58.133 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.133 11:00:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:58.393 [ 00:14:58.393 { 00:14:58.393 "name": "BaseBdev3", 00:14:58.393 "aliases": [ 00:14:58.393 "7f231335-5494-430a-a8a9-00ed8769342d" 00:14:58.393 ], 00:14:58.393 "product_name": "Malloc disk", 00:14:58.393 "block_size": 512, 00:14:58.393 "num_blocks": 65536, 00:14:58.393 "uuid": "7f231335-5494-430a-a8a9-00ed8769342d", 00:14:58.393 "assigned_rate_limits": { 00:14:58.393 "rw_ios_per_sec": 0, 00:14:58.393 "rw_mbytes_per_sec": 0, 00:14:58.393 "r_mbytes_per_sec": 0, 00:14:58.393 "w_mbytes_per_sec": 0 00:14:58.393 }, 00:14:58.393 "claimed": false, 00:14:58.393 "zoned": false, 00:14:58.393 "supported_io_types": { 00:14:58.393 "read": true, 00:14:58.393 "write": true, 00:14:58.393 "unmap": true, 00:14:58.393 "flush": true, 00:14:58.393 "reset": true, 00:14:58.393 "nvme_admin": false, 00:14:58.393 "nvme_io": false, 00:14:58.393 "nvme_io_md": false, 00:14:58.393 "write_zeroes": true, 00:14:58.393 "zcopy": true, 00:14:58.393 "get_zone_info": false, 00:14:58.393 "zone_management": false, 00:14:58.393 "zone_append": false, 00:14:58.393 "compare": false, 00:14:58.393 "compare_and_write": false, 00:14:58.393 "abort": true, 00:14:58.393 "seek_hole": false, 00:14:58.393 "seek_data": false, 00:14:58.393 "copy": true, 00:14:58.393 "nvme_iov_md": false 00:14:58.393 }, 00:14:58.393 "memory_domains": [ 00:14:58.393 { 00:14:58.393 "dma_device_id": "system", 00:14:58.393 "dma_device_type": 1 00:14:58.393 }, 00:14:58.393 { 00:14:58.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.393 "dma_device_type": 2 00:14:58.393 } 00:14:58.393 ], 00:14:58.393 "driver_specific": {} 00:14:58.393 } 00:14:58.393 ] 00:14:58.393 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.393 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:58.393 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:58.393 11:00:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:58.393 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:58.393 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.393 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.393 [2024-11-15 11:00:05.078099] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:58.393 [2024-11-15 11:00:05.078245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:58.393 [2024-11-15 11:00:05.078293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.393 [2024-11-15 11:00:05.080668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:58.393 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.393 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:58.393 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.393 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.393 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.393 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.394 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.394 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.394 11:00:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.394 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.394 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.394 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.394 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.394 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.394 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.394 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.394 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.394 "name": "Existed_Raid", 00:14:58.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.394 "strip_size_kb": 64, 00:14:58.394 "state": "configuring", 00:14:58.394 "raid_level": "raid5f", 00:14:58.394 "superblock": false, 00:14:58.394 "num_base_bdevs": 3, 00:14:58.394 "num_base_bdevs_discovered": 2, 00:14:58.394 "num_base_bdevs_operational": 3, 00:14:58.394 "base_bdevs_list": [ 00:14:58.394 { 00:14:58.394 "name": "BaseBdev1", 00:14:58.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.394 "is_configured": false, 00:14:58.394 "data_offset": 0, 00:14:58.394 "data_size": 0 00:14:58.394 }, 00:14:58.394 { 00:14:58.394 "name": "BaseBdev2", 00:14:58.394 "uuid": "c8f09aed-9db8-4844-af84-f4cdf59604e8", 00:14:58.394 "is_configured": true, 00:14:58.394 "data_offset": 0, 00:14:58.394 "data_size": 65536 00:14:58.394 }, 00:14:58.394 { 00:14:58.394 "name": "BaseBdev3", 00:14:58.394 "uuid": "7f231335-5494-430a-a8a9-00ed8769342d", 00:14:58.394 "is_configured": true, 
00:14:58.394 "data_offset": 0, 00:14:58.394 "data_size": 65536 00:14:58.394 } 00:14:58.394 ] 00:14:58.394 }' 00:14:58.394 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.394 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.654 [2024-11-15 11:00:05.465445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.654 11:00:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.654 "name": "Existed_Raid", 00:14:58.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.654 "strip_size_kb": 64, 00:14:58.654 "state": "configuring", 00:14:58.654 "raid_level": "raid5f", 00:14:58.654 "superblock": false, 00:14:58.654 "num_base_bdevs": 3, 00:14:58.654 "num_base_bdevs_discovered": 1, 00:14:58.654 "num_base_bdevs_operational": 3, 00:14:58.654 "base_bdevs_list": [ 00:14:58.654 { 00:14:58.654 "name": "BaseBdev1", 00:14:58.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.654 "is_configured": false, 00:14:58.654 "data_offset": 0, 00:14:58.654 "data_size": 0 00:14:58.654 }, 00:14:58.654 { 00:14:58.654 "name": null, 00:14:58.654 "uuid": "c8f09aed-9db8-4844-af84-f4cdf59604e8", 00:14:58.654 "is_configured": false, 00:14:58.654 "data_offset": 0, 00:14:58.654 "data_size": 65536 00:14:58.654 }, 00:14:58.654 { 00:14:58.654 "name": "BaseBdev3", 00:14:58.654 "uuid": "7f231335-5494-430a-a8a9-00ed8769342d", 00:14:58.654 "is_configured": true, 00:14:58.654 "data_offset": 0, 00:14:58.654 "data_size": 65536 00:14:58.654 } 00:14:58.654 ] 00:14:58.654 }' 00:14:58.654 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.654 11:00:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.224 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:59.224 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.224 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.224 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.224 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.224 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:59.224 11:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:59.224 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.224 11:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.224 [2024-11-15 11:00:06.000518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.224 BaseBdev1 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:59.224 11:00:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.224 [ 00:14:59.224 { 00:14:59.224 "name": "BaseBdev1", 00:14:59.224 "aliases": [ 00:14:59.224 "40e05b1a-6acc-445a-8ef3-6f52dd448bce" 00:14:59.224 ], 00:14:59.224 "product_name": "Malloc disk", 00:14:59.224 "block_size": 512, 00:14:59.224 "num_blocks": 65536, 00:14:59.224 "uuid": "40e05b1a-6acc-445a-8ef3-6f52dd448bce", 00:14:59.224 "assigned_rate_limits": { 00:14:59.224 "rw_ios_per_sec": 0, 00:14:59.224 "rw_mbytes_per_sec": 0, 00:14:59.224 "r_mbytes_per_sec": 0, 00:14:59.224 "w_mbytes_per_sec": 0 00:14:59.224 }, 00:14:59.224 "claimed": true, 00:14:59.224 "claim_type": "exclusive_write", 00:14:59.224 "zoned": false, 00:14:59.224 "supported_io_types": { 00:14:59.224 "read": true, 00:14:59.224 "write": true, 00:14:59.224 "unmap": true, 00:14:59.224 "flush": true, 00:14:59.224 "reset": true, 00:14:59.224 "nvme_admin": false, 00:14:59.224 "nvme_io": false, 00:14:59.224 "nvme_io_md": false, 00:14:59.224 "write_zeroes": true, 00:14:59.224 "zcopy": true, 00:14:59.224 "get_zone_info": false, 00:14:59.224 "zone_management": false, 00:14:59.224 "zone_append": false, 00:14:59.224 
"compare": false, 00:14:59.224 "compare_and_write": false, 00:14:59.224 "abort": true, 00:14:59.224 "seek_hole": false, 00:14:59.224 "seek_data": false, 00:14:59.224 "copy": true, 00:14:59.224 "nvme_iov_md": false 00:14:59.224 }, 00:14:59.224 "memory_domains": [ 00:14:59.224 { 00:14:59.224 "dma_device_id": "system", 00:14:59.224 "dma_device_type": 1 00:14:59.224 }, 00:14:59.224 { 00:14:59.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.224 "dma_device_type": 2 00:14:59.224 } 00:14:59.224 ], 00:14:59.224 "driver_specific": {} 00:14:59.224 } 00:14:59.224 ] 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.224 11:00:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.224 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.225 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.225 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.225 "name": "Existed_Raid", 00:14:59.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.225 "strip_size_kb": 64, 00:14:59.225 "state": "configuring", 00:14:59.225 "raid_level": "raid5f", 00:14:59.225 "superblock": false, 00:14:59.225 "num_base_bdevs": 3, 00:14:59.225 "num_base_bdevs_discovered": 2, 00:14:59.225 "num_base_bdevs_operational": 3, 00:14:59.225 "base_bdevs_list": [ 00:14:59.225 { 00:14:59.225 "name": "BaseBdev1", 00:14:59.225 "uuid": "40e05b1a-6acc-445a-8ef3-6f52dd448bce", 00:14:59.225 "is_configured": true, 00:14:59.225 "data_offset": 0, 00:14:59.225 "data_size": 65536 00:14:59.225 }, 00:14:59.225 { 00:14:59.225 "name": null, 00:14:59.225 "uuid": "c8f09aed-9db8-4844-af84-f4cdf59604e8", 00:14:59.225 "is_configured": false, 00:14:59.225 "data_offset": 0, 00:14:59.225 "data_size": 65536 00:14:59.225 }, 00:14:59.225 { 00:14:59.225 "name": "BaseBdev3", 00:14:59.225 "uuid": "7f231335-5494-430a-a8a9-00ed8769342d", 00:14:59.225 "is_configured": true, 00:14:59.225 "data_offset": 0, 00:14:59.225 "data_size": 65536 00:14:59.225 } 00:14:59.225 ] 00:14:59.225 }' 00:14:59.225 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.225 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.794 11:00:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.794 [2024-11-15 11:00:06.519694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.794 11:00:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.794 "name": "Existed_Raid", 00:14:59.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.794 "strip_size_kb": 64, 00:14:59.794 "state": "configuring", 00:14:59.794 "raid_level": "raid5f", 00:14:59.794 "superblock": false, 00:14:59.794 "num_base_bdevs": 3, 00:14:59.794 "num_base_bdevs_discovered": 1, 00:14:59.794 "num_base_bdevs_operational": 3, 00:14:59.794 "base_bdevs_list": [ 00:14:59.794 { 00:14:59.794 "name": "BaseBdev1", 00:14:59.794 "uuid": "40e05b1a-6acc-445a-8ef3-6f52dd448bce", 00:14:59.794 "is_configured": true, 00:14:59.794 "data_offset": 0, 00:14:59.794 "data_size": 65536 00:14:59.794 }, 00:14:59.794 { 00:14:59.794 "name": null, 00:14:59.794 "uuid": "c8f09aed-9db8-4844-af84-f4cdf59604e8", 00:14:59.794 "is_configured": false, 00:14:59.794 "data_offset": 0, 00:14:59.794 "data_size": 65536 00:14:59.794 }, 00:14:59.794 { 00:14:59.794 "name": null, 
00:14:59.794 "uuid": "7f231335-5494-430a-a8a9-00ed8769342d", 00:14:59.794 "is_configured": false, 00:14:59.794 "data_offset": 0, 00:14:59.794 "data_size": 65536 00:14:59.794 } 00:14:59.794 ] 00:14:59.794 }' 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.794 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.053 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.053 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.053 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.053 11:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:00.325 11:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.325 [2024-11-15 11:00:07.026862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.325 11:00:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.325 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.325 "name": "Existed_Raid", 00:15:00.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.325 "strip_size_kb": 64, 00:15:00.325 "state": "configuring", 00:15:00.325 "raid_level": "raid5f", 00:15:00.325 "superblock": false, 00:15:00.325 "num_base_bdevs": 3, 00:15:00.325 "num_base_bdevs_discovered": 2, 00:15:00.325 "num_base_bdevs_operational": 3, 00:15:00.325 "base_bdevs_list": [ 00:15:00.325 { 
00:15:00.325 "name": "BaseBdev1", 00:15:00.325 "uuid": "40e05b1a-6acc-445a-8ef3-6f52dd448bce", 00:15:00.325 "is_configured": true, 00:15:00.326 "data_offset": 0, 00:15:00.326 "data_size": 65536 00:15:00.326 }, 00:15:00.326 { 00:15:00.326 "name": null, 00:15:00.326 "uuid": "c8f09aed-9db8-4844-af84-f4cdf59604e8", 00:15:00.326 "is_configured": false, 00:15:00.326 "data_offset": 0, 00:15:00.326 "data_size": 65536 00:15:00.326 }, 00:15:00.326 { 00:15:00.326 "name": "BaseBdev3", 00:15:00.326 "uuid": "7f231335-5494-430a-a8a9-00ed8769342d", 00:15:00.326 "is_configured": true, 00:15:00.326 "data_offset": 0, 00:15:00.326 "data_size": 65536 00:15:00.326 } 00:15:00.326 ] 00:15:00.326 }' 00:15:00.326 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.326 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.585 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.585 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:00.585 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.585 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.845 [2024-11-15 11:00:07.549996] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.845 "name": "Existed_Raid", 00:15:00.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.845 "strip_size_kb": 64, 00:15:00.845 "state": "configuring", 00:15:00.845 "raid_level": "raid5f", 00:15:00.845 "superblock": false, 00:15:00.845 "num_base_bdevs": 3, 00:15:00.845 "num_base_bdevs_discovered": 1, 00:15:00.845 "num_base_bdevs_operational": 3, 00:15:00.845 "base_bdevs_list": [ 00:15:00.845 { 00:15:00.845 "name": null, 00:15:00.845 "uuid": "40e05b1a-6acc-445a-8ef3-6f52dd448bce", 00:15:00.845 "is_configured": false, 00:15:00.845 "data_offset": 0, 00:15:00.845 "data_size": 65536 00:15:00.845 }, 00:15:00.845 { 00:15:00.845 "name": null, 00:15:00.845 "uuid": "c8f09aed-9db8-4844-af84-f4cdf59604e8", 00:15:00.845 "is_configured": false, 00:15:00.845 "data_offset": 0, 00:15:00.845 "data_size": 65536 00:15:00.845 }, 00:15:00.845 { 00:15:00.845 "name": "BaseBdev3", 00:15:00.845 "uuid": "7f231335-5494-430a-a8a9-00ed8769342d", 00:15:00.845 "is_configured": true, 00:15:00.845 "data_offset": 0, 00:15:00.845 "data_size": 65536 00:15:00.845 } 00:15:00.845 ] 00:15:00.845 }' 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.845 11:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.413 [2024-11-15 11:00:08.124128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.413 11:00:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.413 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.413 "name": "Existed_Raid", 00:15:01.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.413 "strip_size_kb": 64, 00:15:01.413 "state": "configuring", 00:15:01.413 "raid_level": "raid5f", 00:15:01.413 "superblock": false, 00:15:01.413 "num_base_bdevs": 3, 00:15:01.413 "num_base_bdevs_discovered": 2, 00:15:01.413 "num_base_bdevs_operational": 3, 00:15:01.413 "base_bdevs_list": [ 00:15:01.413 { 00:15:01.413 "name": null, 00:15:01.413 "uuid": "40e05b1a-6acc-445a-8ef3-6f52dd448bce", 00:15:01.413 "is_configured": false, 00:15:01.414 "data_offset": 0, 00:15:01.414 "data_size": 65536 00:15:01.414 }, 00:15:01.414 { 00:15:01.414 "name": "BaseBdev2", 00:15:01.414 "uuid": "c8f09aed-9db8-4844-af84-f4cdf59604e8", 00:15:01.414 "is_configured": true, 00:15:01.414 "data_offset": 0, 00:15:01.414 "data_size": 65536 00:15:01.414 }, 00:15:01.414 { 00:15:01.414 "name": "BaseBdev3", 00:15:01.414 "uuid": "7f231335-5494-430a-a8a9-00ed8769342d", 00:15:01.414 "is_configured": true, 00:15:01.414 "data_offset": 0, 00:15:01.414 "data_size": 65536 00:15:01.414 } 00:15:01.414 ] 00:15:01.414 }' 00:15:01.414 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.414 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.673 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.673 11:00:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.673 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.673 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:01.673 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.932 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:01.932 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:01.932 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.932 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.932 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.932 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.932 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 40e05b1a-6acc-445a-8ef3-6f52dd448bce 00:15:01.932 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.932 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.932 [2024-11-15 11:00:08.675707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:01.932 [2024-11-15 11:00:08.675827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:01.933 [2024-11-15 11:00:08.675842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:01.933 [2024-11-15 11:00:08.676106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:01.933 [2024-11-15 11:00:08.681521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:01.933 [2024-11-15 11:00:08.681541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:01.933 [2024-11-15 11:00:08.681819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.933 NewBaseBdev 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.933 11:00:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.933 [ 00:15:01.933 { 00:15:01.933 "name": "NewBaseBdev", 00:15:01.933 "aliases": [ 00:15:01.933 "40e05b1a-6acc-445a-8ef3-6f52dd448bce" 00:15:01.933 ], 00:15:01.933 "product_name": "Malloc disk", 00:15:01.933 "block_size": 512, 00:15:01.933 "num_blocks": 65536, 00:15:01.933 "uuid": "40e05b1a-6acc-445a-8ef3-6f52dd448bce", 00:15:01.933 "assigned_rate_limits": { 00:15:01.933 "rw_ios_per_sec": 0, 00:15:01.933 "rw_mbytes_per_sec": 0, 00:15:01.933 "r_mbytes_per_sec": 0, 00:15:01.933 "w_mbytes_per_sec": 0 00:15:01.933 }, 00:15:01.933 "claimed": true, 00:15:01.933 "claim_type": "exclusive_write", 00:15:01.933 "zoned": false, 00:15:01.933 "supported_io_types": { 00:15:01.933 "read": true, 00:15:01.933 "write": true, 00:15:01.933 "unmap": true, 00:15:01.933 "flush": true, 00:15:01.933 "reset": true, 00:15:01.933 "nvme_admin": false, 00:15:01.933 "nvme_io": false, 00:15:01.933 "nvme_io_md": false, 00:15:01.933 "write_zeroes": true, 00:15:01.933 "zcopy": true, 00:15:01.933 "get_zone_info": false, 00:15:01.933 "zone_management": false, 00:15:01.933 "zone_append": false, 00:15:01.933 "compare": false, 00:15:01.933 "compare_and_write": false, 00:15:01.933 "abort": true, 00:15:01.933 "seek_hole": false, 00:15:01.933 "seek_data": false, 00:15:01.933 "copy": true, 00:15:01.933 "nvme_iov_md": false 00:15:01.933 }, 00:15:01.933 "memory_domains": [ 00:15:01.933 { 00:15:01.933 "dma_device_id": "system", 00:15:01.933 "dma_device_type": 1 00:15:01.933 }, 00:15:01.933 { 00:15:01.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.933 "dma_device_type": 2 00:15:01.933 } 00:15:01.933 ], 00:15:01.933 "driver_specific": {} 00:15:01.933 } 00:15:01.933 ] 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:01.933 11:00:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.933 "name": "Existed_Raid", 00:15:01.933 "uuid": "95c34a9b-fa12-4883-b0fd-d08569656ba5", 00:15:01.933 "strip_size_kb": 64, 00:15:01.933 "state": "online", 
00:15:01.933 "raid_level": "raid5f", 00:15:01.933 "superblock": false, 00:15:01.933 "num_base_bdevs": 3, 00:15:01.933 "num_base_bdevs_discovered": 3, 00:15:01.933 "num_base_bdevs_operational": 3, 00:15:01.933 "base_bdevs_list": [ 00:15:01.933 { 00:15:01.933 "name": "NewBaseBdev", 00:15:01.933 "uuid": "40e05b1a-6acc-445a-8ef3-6f52dd448bce", 00:15:01.933 "is_configured": true, 00:15:01.933 "data_offset": 0, 00:15:01.933 "data_size": 65536 00:15:01.933 }, 00:15:01.933 { 00:15:01.933 "name": "BaseBdev2", 00:15:01.933 "uuid": "c8f09aed-9db8-4844-af84-f4cdf59604e8", 00:15:01.933 "is_configured": true, 00:15:01.933 "data_offset": 0, 00:15:01.933 "data_size": 65536 00:15:01.933 }, 00:15:01.933 { 00:15:01.933 "name": "BaseBdev3", 00:15:01.933 "uuid": "7f231335-5494-430a-a8a9-00ed8769342d", 00:15:01.933 "is_configured": true, 00:15:01.933 "data_offset": 0, 00:15:01.933 "data_size": 65536 00:15:01.933 } 00:15:01.933 ] 00:15:01.933 }' 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.933 11:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:02.502 11:00:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.502 [2024-11-15 11:00:09.223568] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:02.502 "name": "Existed_Raid", 00:15:02.502 "aliases": [ 00:15:02.502 "95c34a9b-fa12-4883-b0fd-d08569656ba5" 00:15:02.502 ], 00:15:02.502 "product_name": "Raid Volume", 00:15:02.502 "block_size": 512, 00:15:02.502 "num_blocks": 131072, 00:15:02.502 "uuid": "95c34a9b-fa12-4883-b0fd-d08569656ba5", 00:15:02.502 "assigned_rate_limits": { 00:15:02.502 "rw_ios_per_sec": 0, 00:15:02.502 "rw_mbytes_per_sec": 0, 00:15:02.502 "r_mbytes_per_sec": 0, 00:15:02.502 "w_mbytes_per_sec": 0 00:15:02.502 }, 00:15:02.502 "claimed": false, 00:15:02.502 "zoned": false, 00:15:02.502 "supported_io_types": { 00:15:02.502 "read": true, 00:15:02.502 "write": true, 00:15:02.502 "unmap": false, 00:15:02.502 "flush": false, 00:15:02.502 "reset": true, 00:15:02.502 "nvme_admin": false, 00:15:02.502 "nvme_io": false, 00:15:02.502 "nvme_io_md": false, 00:15:02.502 "write_zeroes": true, 00:15:02.502 "zcopy": false, 00:15:02.502 "get_zone_info": false, 00:15:02.502 "zone_management": false, 00:15:02.502 "zone_append": false, 00:15:02.502 "compare": false, 00:15:02.502 "compare_and_write": false, 00:15:02.502 "abort": false, 00:15:02.502 "seek_hole": false, 00:15:02.502 "seek_data": false, 00:15:02.502 "copy": false, 00:15:02.502 "nvme_iov_md": false 00:15:02.502 }, 00:15:02.502 "driver_specific": { 00:15:02.502 "raid": { 00:15:02.502 "uuid": 
"95c34a9b-fa12-4883-b0fd-d08569656ba5", 00:15:02.502 "strip_size_kb": 64, 00:15:02.502 "state": "online", 00:15:02.502 "raid_level": "raid5f", 00:15:02.502 "superblock": false, 00:15:02.502 "num_base_bdevs": 3, 00:15:02.502 "num_base_bdevs_discovered": 3, 00:15:02.502 "num_base_bdevs_operational": 3, 00:15:02.502 "base_bdevs_list": [ 00:15:02.502 { 00:15:02.502 "name": "NewBaseBdev", 00:15:02.502 "uuid": "40e05b1a-6acc-445a-8ef3-6f52dd448bce", 00:15:02.502 "is_configured": true, 00:15:02.502 "data_offset": 0, 00:15:02.502 "data_size": 65536 00:15:02.502 }, 00:15:02.502 { 00:15:02.502 "name": "BaseBdev2", 00:15:02.502 "uuid": "c8f09aed-9db8-4844-af84-f4cdf59604e8", 00:15:02.502 "is_configured": true, 00:15:02.502 "data_offset": 0, 00:15:02.502 "data_size": 65536 00:15:02.502 }, 00:15:02.502 { 00:15:02.502 "name": "BaseBdev3", 00:15:02.502 "uuid": "7f231335-5494-430a-a8a9-00ed8769342d", 00:15:02.502 "is_configured": true, 00:15:02.502 "data_offset": 0, 00:15:02.502 "data_size": 65536 00:15:02.502 } 00:15:02.502 ] 00:15:02.502 } 00:15:02.502 } 00:15:02.502 }' 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:02.502 BaseBdev2 00:15:02.502 BaseBdev3' 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:02.502 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.503 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.503 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.762 11:00:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.762 [2024-11-15 11:00:09.506814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:02.762 [2024-11-15 11:00:09.506843] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.762 [2024-11-15 11:00:09.506924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.762 [2024-11-15 11:00:09.507221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.762 [2024-11-15 11:00:09.507234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80056 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 80056 ']' 00:15:02.762 11:00:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 80056 00:15:02.763 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:15:02.763 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:02.763 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80056 00:15:02.763 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:02.763 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:02.763 killing process with pid 80056 00:15:02.763 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80056' 00:15:02.763 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 80056 00:15:02.763 [2024-11-15 11:00:09.551104] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.763 11:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 80056 00:15:03.022 [2024-11-15 11:00:09.857197] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:04.423 11:00:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:04.423 00:15:04.423 real 0m10.814s 00:15:04.423 user 0m17.186s 00:15:04.423 sys 0m1.944s 00:15:04.423 11:00:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:04.423 ************************************ 00:15:04.423 END TEST raid5f_state_function_test 00:15:04.423 ************************************ 00:15:04.423 11:00:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.423 11:00:11 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:04.423 11:00:11 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:04.423 11:00:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:04.423 11:00:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:04.423 ************************************ 00:15:04.423 START TEST raid5f_state_function_test_sb 00:15:04.423 ************************************ 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:04.423 11:00:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80677 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80677' 00:15:04.423 Process raid pid: 80677 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80677 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 80677 ']' 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:04.423 11:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.423 [2024-11-15 11:00:11.148712] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:15:04.423 [2024-11-15 11:00:11.148830] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.423 [2024-11-15 11:00:11.323412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.682 [2024-11-15 11:00:11.436843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.940 [2024-11-15 11:00:11.638559] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.940 [2024-11-15 11:00:11.638599] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.199 11:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:05.199 11:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:05.199 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:05.199 11:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.199 11:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.199 [2024-11-15 11:00:11.994562] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:05.199 [2024-11-15 11:00:11.994622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:05.199 [2024-11-15 11:00:11.994633] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:05.199 [2024-11-15 11:00:11.994643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:05.199 [2024-11-15 11:00:11.994650] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:05.199 [2024-11-15 11:00:11.994658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:05.199 11:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.199 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:05.199 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.199 11:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.199 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.199 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.199 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.199 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.199 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.199 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.199 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.199 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.199 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.199 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.199 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.199 11:00:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.199 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.199 "name": "Existed_Raid", 00:15:05.199 "uuid": "5c1292b0-d4d6-44b8-81fe-ebd7220464fa", 00:15:05.199 "strip_size_kb": 64, 00:15:05.199 "state": "configuring", 00:15:05.199 "raid_level": "raid5f", 00:15:05.199 "superblock": true, 00:15:05.199 "num_base_bdevs": 3, 00:15:05.199 "num_base_bdevs_discovered": 0, 00:15:05.199 "num_base_bdevs_operational": 3, 00:15:05.199 "base_bdevs_list": [ 00:15:05.199 { 00:15:05.199 "name": "BaseBdev1", 00:15:05.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.199 "is_configured": false, 00:15:05.199 "data_offset": 0, 00:15:05.199 "data_size": 0 00:15:05.199 }, 00:15:05.199 { 00:15:05.199 "name": "BaseBdev2", 00:15:05.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.199 "is_configured": false, 00:15:05.199 "data_offset": 0, 00:15:05.199 "data_size": 0 00:15:05.199 }, 00:15:05.199 { 00:15:05.199 "name": "BaseBdev3", 00:15:05.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.199 "is_configured": false, 00:15:05.199 "data_offset": 0, 00:15:05.199 "data_size": 0 00:15:05.199 } 00:15:05.199 ] 00:15:05.199 }' 00:15:05.199 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.199 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.769 [2024-11-15 11:00:12.465692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:05.769 
[2024-11-15 11:00:12.465791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.769 [2024-11-15 11:00:12.477687] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:05.769 [2024-11-15 11:00:12.477772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:05.769 [2024-11-15 11:00:12.477817] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:05.769 [2024-11-15 11:00:12.477840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:05.769 [2024-11-15 11:00:12.477858] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:05.769 [2024-11-15 11:00:12.477879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.769 [2024-11-15 11:00:12.526207] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.769 BaseBdev1 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.769 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.769 [ 00:15:05.769 { 00:15:05.769 "name": "BaseBdev1", 00:15:05.769 "aliases": [ 00:15:05.769 "190a40aa-c108-484f-8797-7ef255628f2e" 00:15:05.769 ], 00:15:05.769 "product_name": "Malloc disk", 00:15:05.769 "block_size": 512, 00:15:05.769 
"num_blocks": 65536, 00:15:05.769 "uuid": "190a40aa-c108-484f-8797-7ef255628f2e", 00:15:05.769 "assigned_rate_limits": { 00:15:05.769 "rw_ios_per_sec": 0, 00:15:05.769 "rw_mbytes_per_sec": 0, 00:15:05.769 "r_mbytes_per_sec": 0, 00:15:05.769 "w_mbytes_per_sec": 0 00:15:05.770 }, 00:15:05.770 "claimed": true, 00:15:05.770 "claim_type": "exclusive_write", 00:15:05.770 "zoned": false, 00:15:05.770 "supported_io_types": { 00:15:05.770 "read": true, 00:15:05.770 "write": true, 00:15:05.770 "unmap": true, 00:15:05.770 "flush": true, 00:15:05.770 "reset": true, 00:15:05.770 "nvme_admin": false, 00:15:05.770 "nvme_io": false, 00:15:05.770 "nvme_io_md": false, 00:15:05.770 "write_zeroes": true, 00:15:05.770 "zcopy": true, 00:15:05.770 "get_zone_info": false, 00:15:05.770 "zone_management": false, 00:15:05.770 "zone_append": false, 00:15:05.770 "compare": false, 00:15:05.770 "compare_and_write": false, 00:15:05.770 "abort": true, 00:15:05.770 "seek_hole": false, 00:15:05.770 "seek_data": false, 00:15:05.770 "copy": true, 00:15:05.770 "nvme_iov_md": false 00:15:05.770 }, 00:15:05.770 "memory_domains": [ 00:15:05.770 { 00:15:05.770 "dma_device_id": "system", 00:15:05.770 "dma_device_type": 1 00:15:05.770 }, 00:15:05.770 { 00:15:05.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.770 "dma_device_type": 2 00:15:05.770 } 00:15:05.770 ], 00:15:05.770 "driver_specific": {} 00:15:05.770 } 00:15:05.770 ] 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.770 "name": "Existed_Raid", 00:15:05.770 "uuid": "e654e1c2-3b9f-42ac-917c-d79ed9f112a4", 00:15:05.770 "strip_size_kb": 64, 00:15:05.770 "state": "configuring", 00:15:05.770 "raid_level": "raid5f", 00:15:05.770 "superblock": true, 00:15:05.770 "num_base_bdevs": 3, 00:15:05.770 "num_base_bdevs_discovered": 1, 00:15:05.770 "num_base_bdevs_operational": 3, 00:15:05.770 "base_bdevs_list": [ 00:15:05.770 { 00:15:05.770 
"name": "BaseBdev1", 00:15:05.770 "uuid": "190a40aa-c108-484f-8797-7ef255628f2e", 00:15:05.770 "is_configured": true, 00:15:05.770 "data_offset": 2048, 00:15:05.770 "data_size": 63488 00:15:05.770 }, 00:15:05.770 { 00:15:05.770 "name": "BaseBdev2", 00:15:05.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.770 "is_configured": false, 00:15:05.770 "data_offset": 0, 00:15:05.770 "data_size": 0 00:15:05.770 }, 00:15:05.770 { 00:15:05.770 "name": "BaseBdev3", 00:15:05.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.770 "is_configured": false, 00:15:05.770 "data_offset": 0, 00:15:05.770 "data_size": 0 00:15:05.770 } 00:15:05.770 ] 00:15:05.770 }' 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.770 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.340 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:06.340 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.340 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.340 [2024-11-15 11:00:12.989465] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:06.340 [2024-11-15 11:00:12.989584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:06.340 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.340 11:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:06.340 11:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.340 11:00:12 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:06.340 [2024-11-15 11:00:13.001507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.340 [2024-11-15 11:00:13.003366] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:06.340 [2024-11-15 11:00:13.003409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:06.340 [2024-11-15 11:00:13.003419] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:06.340 [2024-11-15 11:00:13.003428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:06.340 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.340 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:06.340 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.341 "name": "Existed_Raid", 00:15:06.341 "uuid": "1cc810fd-6323-4a79-9cbd-716d5b964d22", 00:15:06.341 "strip_size_kb": 64, 00:15:06.341 "state": "configuring", 00:15:06.341 "raid_level": "raid5f", 00:15:06.341 "superblock": true, 00:15:06.341 "num_base_bdevs": 3, 00:15:06.341 "num_base_bdevs_discovered": 1, 00:15:06.341 "num_base_bdevs_operational": 3, 00:15:06.341 "base_bdevs_list": [ 00:15:06.341 { 00:15:06.341 "name": "BaseBdev1", 00:15:06.341 "uuid": "190a40aa-c108-484f-8797-7ef255628f2e", 00:15:06.341 "is_configured": true, 00:15:06.341 "data_offset": 2048, 00:15:06.341 "data_size": 63488 00:15:06.341 }, 00:15:06.341 { 00:15:06.341 "name": "BaseBdev2", 00:15:06.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.341 "is_configured": false, 00:15:06.341 "data_offset": 0, 00:15:06.341 "data_size": 0 00:15:06.341 }, 00:15:06.341 { 00:15:06.341 "name": "BaseBdev3", 00:15:06.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.341 "is_configured": false, 00:15:06.341 "data_offset": 0, 00:15:06.341 "data_size": 
0 00:15:06.341 } 00:15:06.341 ] 00:15:06.341 }' 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.341 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.599 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:06.599 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.599 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.599 [2024-11-15 11:00:13.514924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:06.599 BaseBdev2 00:15:06.599 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.599 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:06.599 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:06.599 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:06.599 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:06.599 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:06.599 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:06.599 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:06.599 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.599 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.857 11:00:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.857 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:06.857 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.857 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.857 [ 00:15:06.857 { 00:15:06.857 "name": "BaseBdev2", 00:15:06.857 "aliases": [ 00:15:06.857 "26e90193-f0de-49fe-8f26-0fc2922cb8d0" 00:15:06.857 ], 00:15:06.857 "product_name": "Malloc disk", 00:15:06.857 "block_size": 512, 00:15:06.857 "num_blocks": 65536, 00:15:06.857 "uuid": "26e90193-f0de-49fe-8f26-0fc2922cb8d0", 00:15:06.857 "assigned_rate_limits": { 00:15:06.857 "rw_ios_per_sec": 0, 00:15:06.857 "rw_mbytes_per_sec": 0, 00:15:06.857 "r_mbytes_per_sec": 0, 00:15:06.857 "w_mbytes_per_sec": 0 00:15:06.857 }, 00:15:06.857 "claimed": true, 00:15:06.857 "claim_type": "exclusive_write", 00:15:06.857 "zoned": false, 00:15:06.857 "supported_io_types": { 00:15:06.857 "read": true, 00:15:06.857 "write": true, 00:15:06.857 "unmap": true, 00:15:06.857 "flush": true, 00:15:06.857 "reset": true, 00:15:06.857 "nvme_admin": false, 00:15:06.857 "nvme_io": false, 00:15:06.857 "nvme_io_md": false, 00:15:06.857 "write_zeroes": true, 00:15:06.857 "zcopy": true, 00:15:06.857 "get_zone_info": false, 00:15:06.857 "zone_management": false, 00:15:06.857 "zone_append": false, 00:15:06.857 "compare": false, 00:15:06.857 "compare_and_write": false, 00:15:06.857 "abort": true, 00:15:06.857 "seek_hole": false, 00:15:06.857 "seek_data": false, 00:15:06.857 "copy": true, 00:15:06.857 "nvme_iov_md": false 00:15:06.857 }, 00:15:06.857 "memory_domains": [ 00:15:06.857 { 00:15:06.857 "dma_device_id": "system", 00:15:06.857 "dma_device_type": 1 00:15:06.857 }, 00:15:06.857 { 00:15:06.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.857 "dma_device_type": 2 00:15:06.857 } 
00:15:06.857 ], 00:15:06.857 "driver_specific": {} 00:15:06.857 } 00:15:06.857 ] 00:15:06.857 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.857 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:06.857 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:06.857 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:06.857 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:06.857 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.857 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.857 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.858 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.858 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.858 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.858 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.858 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.858 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.858 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.858 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:06.858 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.858 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.858 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.858 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.858 "name": "Existed_Raid", 00:15:06.858 "uuid": "1cc810fd-6323-4a79-9cbd-716d5b964d22", 00:15:06.858 "strip_size_kb": 64, 00:15:06.858 "state": "configuring", 00:15:06.858 "raid_level": "raid5f", 00:15:06.858 "superblock": true, 00:15:06.858 "num_base_bdevs": 3, 00:15:06.858 "num_base_bdevs_discovered": 2, 00:15:06.858 "num_base_bdevs_operational": 3, 00:15:06.858 "base_bdevs_list": [ 00:15:06.858 { 00:15:06.858 "name": "BaseBdev1", 00:15:06.858 "uuid": "190a40aa-c108-484f-8797-7ef255628f2e", 00:15:06.858 "is_configured": true, 00:15:06.858 "data_offset": 2048, 00:15:06.858 "data_size": 63488 00:15:06.858 }, 00:15:06.858 { 00:15:06.858 "name": "BaseBdev2", 00:15:06.858 "uuid": "26e90193-f0de-49fe-8f26-0fc2922cb8d0", 00:15:06.858 "is_configured": true, 00:15:06.858 "data_offset": 2048, 00:15:06.858 "data_size": 63488 00:15:06.858 }, 00:15:06.858 { 00:15:06.858 "name": "BaseBdev3", 00:15:06.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.858 "is_configured": false, 00:15:06.858 "data_offset": 0, 00:15:06.858 "data_size": 0 00:15:06.858 } 00:15:06.858 ] 00:15:06.858 }' 00:15:06.858 11:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.858 11:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.115 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:07.115 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:07.115 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.375 [2024-11-15 11:00:14.080837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:07.375 [2024-11-15 11:00:14.081175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:07.375 [2024-11-15 11:00:14.081204] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:07.375 [2024-11-15 11:00:14.081534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:07.375 BaseBdev3 00:15:07.375 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.375 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:07.375 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:07.375 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:07.375 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:07.375 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:07.375 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:07.375 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:07.375 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.375 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.375 [2024-11-15 11:00:14.087456] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:07.375 [2024-11-15 11:00:14.087515] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:07.375 [2024-11-15 11:00:14.087737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.375 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.375 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:07.375 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.375 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.375 [ 00:15:07.375 { 00:15:07.375 "name": "BaseBdev3", 00:15:07.375 "aliases": [ 00:15:07.375 "d79d4e9b-1b64-430a-9b9a-b4e4f64dcd42" 00:15:07.375 ], 00:15:07.375 "product_name": "Malloc disk", 00:15:07.375 "block_size": 512, 00:15:07.375 "num_blocks": 65536, 00:15:07.375 "uuid": "d79d4e9b-1b64-430a-9b9a-b4e4f64dcd42", 00:15:07.375 "assigned_rate_limits": { 00:15:07.375 "rw_ios_per_sec": 0, 00:15:07.375 "rw_mbytes_per_sec": 0, 00:15:07.376 "r_mbytes_per_sec": 0, 00:15:07.376 "w_mbytes_per_sec": 0 00:15:07.376 }, 00:15:07.376 "claimed": true, 00:15:07.376 "claim_type": "exclusive_write", 00:15:07.376 "zoned": false, 00:15:07.376 "supported_io_types": { 00:15:07.376 "read": true, 00:15:07.376 "write": true, 00:15:07.376 "unmap": true, 00:15:07.376 "flush": true, 00:15:07.376 "reset": true, 00:15:07.376 "nvme_admin": false, 00:15:07.376 "nvme_io": false, 00:15:07.376 "nvme_io_md": false, 00:15:07.376 "write_zeroes": true, 00:15:07.376 "zcopy": true, 00:15:07.376 "get_zone_info": false, 00:15:07.376 "zone_management": false, 00:15:07.376 "zone_append": false, 00:15:07.376 "compare": false, 00:15:07.376 "compare_and_write": false, 00:15:07.376 "abort": true, 00:15:07.376 "seek_hole": false, 00:15:07.376 "seek_data": false, 00:15:07.376 "copy": true, 00:15:07.376 
"nvme_iov_md": false 00:15:07.376 }, 00:15:07.376 "memory_domains": [ 00:15:07.376 { 00:15:07.376 "dma_device_id": "system", 00:15:07.376 "dma_device_type": 1 00:15:07.376 }, 00:15:07.376 { 00:15:07.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.376 "dma_device_type": 2 00:15:07.376 } 00:15:07.376 ], 00:15:07.376 "driver_specific": {} 00:15:07.376 } 00:15:07.376 ] 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.376 "name": "Existed_Raid", 00:15:07.376 "uuid": "1cc810fd-6323-4a79-9cbd-716d5b964d22", 00:15:07.376 "strip_size_kb": 64, 00:15:07.376 "state": "online", 00:15:07.376 "raid_level": "raid5f", 00:15:07.376 "superblock": true, 00:15:07.376 "num_base_bdevs": 3, 00:15:07.376 "num_base_bdevs_discovered": 3, 00:15:07.376 "num_base_bdevs_operational": 3, 00:15:07.376 "base_bdevs_list": [ 00:15:07.376 { 00:15:07.376 "name": "BaseBdev1", 00:15:07.376 "uuid": "190a40aa-c108-484f-8797-7ef255628f2e", 00:15:07.376 "is_configured": true, 00:15:07.376 "data_offset": 2048, 00:15:07.376 "data_size": 63488 00:15:07.376 }, 00:15:07.376 { 00:15:07.376 "name": "BaseBdev2", 00:15:07.376 "uuid": "26e90193-f0de-49fe-8f26-0fc2922cb8d0", 00:15:07.376 "is_configured": true, 00:15:07.376 "data_offset": 2048, 00:15:07.376 "data_size": 63488 00:15:07.376 }, 00:15:07.376 { 00:15:07.376 "name": "BaseBdev3", 00:15:07.376 "uuid": "d79d4e9b-1b64-430a-9b9a-b4e4f64dcd42", 00:15:07.376 "is_configured": true, 00:15:07.376 "data_offset": 2048, 00:15:07.376 "data_size": 63488 00:15:07.376 } 00:15:07.376 ] 00:15:07.376 }' 00:15:07.376 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.376 11:00:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.945 [2024-11-15 11:00:14.585437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:07.945 "name": "Existed_Raid", 00:15:07.945 "aliases": [ 00:15:07.945 "1cc810fd-6323-4a79-9cbd-716d5b964d22" 00:15:07.945 ], 00:15:07.945 "product_name": "Raid Volume", 00:15:07.945 "block_size": 512, 00:15:07.945 "num_blocks": 126976, 00:15:07.945 "uuid": "1cc810fd-6323-4a79-9cbd-716d5b964d22", 00:15:07.945 "assigned_rate_limits": { 00:15:07.945 "rw_ios_per_sec": 0, 00:15:07.945 
"rw_mbytes_per_sec": 0, 00:15:07.945 "r_mbytes_per_sec": 0, 00:15:07.945 "w_mbytes_per_sec": 0 00:15:07.945 }, 00:15:07.945 "claimed": false, 00:15:07.945 "zoned": false, 00:15:07.945 "supported_io_types": { 00:15:07.945 "read": true, 00:15:07.945 "write": true, 00:15:07.945 "unmap": false, 00:15:07.945 "flush": false, 00:15:07.945 "reset": true, 00:15:07.945 "nvme_admin": false, 00:15:07.945 "nvme_io": false, 00:15:07.945 "nvme_io_md": false, 00:15:07.945 "write_zeroes": true, 00:15:07.945 "zcopy": false, 00:15:07.945 "get_zone_info": false, 00:15:07.945 "zone_management": false, 00:15:07.945 "zone_append": false, 00:15:07.945 "compare": false, 00:15:07.945 "compare_and_write": false, 00:15:07.945 "abort": false, 00:15:07.945 "seek_hole": false, 00:15:07.945 "seek_data": false, 00:15:07.945 "copy": false, 00:15:07.945 "nvme_iov_md": false 00:15:07.945 }, 00:15:07.945 "driver_specific": { 00:15:07.945 "raid": { 00:15:07.945 "uuid": "1cc810fd-6323-4a79-9cbd-716d5b964d22", 00:15:07.945 "strip_size_kb": 64, 00:15:07.945 "state": "online", 00:15:07.945 "raid_level": "raid5f", 00:15:07.945 "superblock": true, 00:15:07.945 "num_base_bdevs": 3, 00:15:07.945 "num_base_bdevs_discovered": 3, 00:15:07.945 "num_base_bdevs_operational": 3, 00:15:07.945 "base_bdevs_list": [ 00:15:07.945 { 00:15:07.945 "name": "BaseBdev1", 00:15:07.945 "uuid": "190a40aa-c108-484f-8797-7ef255628f2e", 00:15:07.945 "is_configured": true, 00:15:07.945 "data_offset": 2048, 00:15:07.945 "data_size": 63488 00:15:07.945 }, 00:15:07.945 { 00:15:07.945 "name": "BaseBdev2", 00:15:07.945 "uuid": "26e90193-f0de-49fe-8f26-0fc2922cb8d0", 00:15:07.945 "is_configured": true, 00:15:07.945 "data_offset": 2048, 00:15:07.945 "data_size": 63488 00:15:07.945 }, 00:15:07.945 { 00:15:07.945 "name": "BaseBdev3", 00:15:07.945 "uuid": "d79d4e9b-1b64-430a-9b9a-b4e4f64dcd42", 00:15:07.945 "is_configured": true, 00:15:07.945 "data_offset": 2048, 00:15:07.945 "data_size": 63488 00:15:07.945 } 00:15:07.945 ] 00:15:07.945 } 
00:15:07.945 } 00:15:07.945 }' 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:07.945 BaseBdev2 00:15:07.945 BaseBdev3' 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.945 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.945 [2024-11-15 
11:00:14.848783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.204 11:00:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.204 "name": "Existed_Raid", 00:15:08.204 "uuid": "1cc810fd-6323-4a79-9cbd-716d5b964d22", 00:15:08.204 "strip_size_kb": 64, 00:15:08.204 "state": "online", 00:15:08.204 "raid_level": "raid5f", 00:15:08.204 "superblock": true, 00:15:08.204 "num_base_bdevs": 3, 00:15:08.204 "num_base_bdevs_discovered": 2, 00:15:08.204 "num_base_bdevs_operational": 2, 00:15:08.204 "base_bdevs_list": [ 00:15:08.204 { 00:15:08.204 "name": null, 00:15:08.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.204 "is_configured": false, 00:15:08.204 "data_offset": 0, 00:15:08.204 "data_size": 63488 00:15:08.204 }, 00:15:08.204 { 00:15:08.204 "name": "BaseBdev2", 00:15:08.204 "uuid": "26e90193-f0de-49fe-8f26-0fc2922cb8d0", 00:15:08.204 "is_configured": true, 00:15:08.204 "data_offset": 2048, 00:15:08.204 "data_size": 63488 00:15:08.204 }, 00:15:08.204 { 00:15:08.204 "name": "BaseBdev3", 00:15:08.204 "uuid": "d79d4e9b-1b64-430a-9b9a-b4e4f64dcd42", 00:15:08.204 "is_configured": true, 00:15:08.204 "data_offset": 2048, 00:15:08.204 "data_size": 63488 00:15:08.204 } 00:15:08.204 ] 00:15:08.204 }' 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.204 11:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:08.773 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:08.773 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:08.773 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.773 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:08.773 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.773 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.773 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.773 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:08.773 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:08.773 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:08.773 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.773 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.773 [2024-11-15 11:00:15.488486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:08.773 [2024-11-15 11:00:15.488644] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.773 [2024-11-15 11:00:15.585503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.773 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.773 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:08.773 11:00:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:08.773 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.773 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:08.774 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.774 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.774 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.774 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:08.774 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:08.774 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:08.774 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.774 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.774 [2024-11-15 11:00:15.641467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:08.774 [2024-11-15 11:00:15.641520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:09.033 
11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.033 BaseBdev2 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:09.033 11:00:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.033 [ 00:15:09.033 { 00:15:09.033 "name": "BaseBdev2", 00:15:09.033 "aliases": [ 00:15:09.033 "252d7623-4d89-4809-a305-228a0ed069df" 00:15:09.033 ], 00:15:09.033 "product_name": "Malloc disk", 00:15:09.033 "block_size": 512, 00:15:09.033 "num_blocks": 65536, 00:15:09.033 "uuid": "252d7623-4d89-4809-a305-228a0ed069df", 00:15:09.033 "assigned_rate_limits": { 00:15:09.033 "rw_ios_per_sec": 0, 00:15:09.033 "rw_mbytes_per_sec": 0, 00:15:09.033 "r_mbytes_per_sec": 0, 00:15:09.033 "w_mbytes_per_sec": 0 00:15:09.033 }, 00:15:09.033 "claimed": false, 00:15:09.033 "zoned": false, 00:15:09.033 "supported_io_types": { 00:15:09.033 "read": true, 00:15:09.033 "write": true, 00:15:09.033 "unmap": true, 00:15:09.033 "flush": true, 00:15:09.033 "reset": true, 00:15:09.033 "nvme_admin": false, 00:15:09.033 "nvme_io": false, 00:15:09.033 "nvme_io_md": false, 00:15:09.033 "write_zeroes": true, 00:15:09.033 "zcopy": true, 00:15:09.033 "get_zone_info": false, 
00:15:09.033 "zone_management": false, 00:15:09.033 "zone_append": false, 00:15:09.033 "compare": false, 00:15:09.033 "compare_and_write": false, 00:15:09.033 "abort": true, 00:15:09.033 "seek_hole": false, 00:15:09.033 "seek_data": false, 00:15:09.033 "copy": true, 00:15:09.033 "nvme_iov_md": false 00:15:09.033 }, 00:15:09.033 "memory_domains": [ 00:15:09.033 { 00:15:09.033 "dma_device_id": "system", 00:15:09.033 "dma_device_type": 1 00:15:09.033 }, 00:15:09.033 { 00:15:09.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.033 "dma_device_type": 2 00:15:09.033 } 00:15:09.033 ], 00:15:09.033 "driver_specific": {} 00:15:09.033 } 00:15:09.033 ] 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.033 BaseBdev3 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:09.033 11:00:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:09.033 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.034 [ 00:15:09.034 { 00:15:09.034 "name": "BaseBdev3", 00:15:09.034 "aliases": [ 00:15:09.034 "4cebb15e-ec5f-4a80-8e27-5f531ab428d7" 00:15:09.034 ], 00:15:09.034 "product_name": "Malloc disk", 00:15:09.034 "block_size": 512, 00:15:09.034 "num_blocks": 65536, 00:15:09.034 "uuid": "4cebb15e-ec5f-4a80-8e27-5f531ab428d7", 00:15:09.034 "assigned_rate_limits": { 00:15:09.034 "rw_ios_per_sec": 0, 00:15:09.034 "rw_mbytes_per_sec": 0, 00:15:09.034 "r_mbytes_per_sec": 0, 00:15:09.034 "w_mbytes_per_sec": 0 00:15:09.034 }, 00:15:09.034 "claimed": false, 00:15:09.034 "zoned": false, 00:15:09.034 "supported_io_types": { 00:15:09.034 "read": true, 00:15:09.034 "write": true, 00:15:09.034 "unmap": true, 00:15:09.034 "flush": true, 00:15:09.034 "reset": true, 00:15:09.034 "nvme_admin": false, 00:15:09.034 "nvme_io": false, 00:15:09.034 "nvme_io_md": 
false, 00:15:09.034 "write_zeroes": true, 00:15:09.034 "zcopy": true, 00:15:09.034 "get_zone_info": false, 00:15:09.034 "zone_management": false, 00:15:09.034 "zone_append": false, 00:15:09.034 "compare": false, 00:15:09.034 "compare_and_write": false, 00:15:09.034 "abort": true, 00:15:09.034 "seek_hole": false, 00:15:09.034 "seek_data": false, 00:15:09.034 "copy": true, 00:15:09.034 "nvme_iov_md": false 00:15:09.034 }, 00:15:09.034 "memory_domains": [ 00:15:09.034 { 00:15:09.034 "dma_device_id": "system", 00:15:09.034 "dma_device_type": 1 00:15:09.034 }, 00:15:09.034 { 00:15:09.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.034 "dma_device_type": 2 00:15:09.034 } 00:15:09.034 ], 00:15:09.034 "driver_specific": {} 00:15:09.034 } 00:15:09.034 ] 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.034 [2024-11-15 11:00:15.945874] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:09.034 [2024-11-15 11:00:15.945956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:09.034 [2024-11-15 11:00:15.946011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:09.034 [2024-11-15 11:00:15.947762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.034 11:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.292 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.292 11:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.292 11:00:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.292 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.292 "name": "Existed_Raid", 00:15:09.292 "uuid": "2864b44c-2e4c-46b4-a7ac-82c9603fd190", 00:15:09.292 "strip_size_kb": 64, 00:15:09.292 "state": "configuring", 00:15:09.292 "raid_level": "raid5f", 00:15:09.292 "superblock": true, 00:15:09.292 "num_base_bdevs": 3, 00:15:09.292 "num_base_bdevs_discovered": 2, 00:15:09.292 "num_base_bdevs_operational": 3, 00:15:09.292 "base_bdevs_list": [ 00:15:09.292 { 00:15:09.292 "name": "BaseBdev1", 00:15:09.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.292 "is_configured": false, 00:15:09.292 "data_offset": 0, 00:15:09.292 "data_size": 0 00:15:09.292 }, 00:15:09.292 { 00:15:09.292 "name": "BaseBdev2", 00:15:09.292 "uuid": "252d7623-4d89-4809-a305-228a0ed069df", 00:15:09.292 "is_configured": true, 00:15:09.292 "data_offset": 2048, 00:15:09.292 "data_size": 63488 00:15:09.292 }, 00:15:09.292 { 00:15:09.292 "name": "BaseBdev3", 00:15:09.292 "uuid": "4cebb15e-ec5f-4a80-8e27-5f531ab428d7", 00:15:09.292 "is_configured": true, 00:15:09.292 "data_offset": 2048, 00:15:09.292 "data_size": 63488 00:15:09.292 } 00:15:09.292 ] 00:15:09.292 }' 00:15:09.292 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.292 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.551 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:09.551 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.551 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.551 [2024-11-15 11:00:16.381199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:09.551 
11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.551 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:09.551 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.551 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.551 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.551 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.551 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.552 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.552 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.552 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.552 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.552 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.552 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.552 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.552 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.552 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.552 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:09.552 "name": "Existed_Raid", 00:15:09.552 "uuid": "2864b44c-2e4c-46b4-a7ac-82c9603fd190", 00:15:09.552 "strip_size_kb": 64, 00:15:09.552 "state": "configuring", 00:15:09.552 "raid_level": "raid5f", 00:15:09.552 "superblock": true, 00:15:09.552 "num_base_bdevs": 3, 00:15:09.552 "num_base_bdevs_discovered": 1, 00:15:09.552 "num_base_bdevs_operational": 3, 00:15:09.552 "base_bdevs_list": [ 00:15:09.552 { 00:15:09.552 "name": "BaseBdev1", 00:15:09.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.552 "is_configured": false, 00:15:09.552 "data_offset": 0, 00:15:09.552 "data_size": 0 00:15:09.552 }, 00:15:09.552 { 00:15:09.552 "name": null, 00:15:09.552 "uuid": "252d7623-4d89-4809-a305-228a0ed069df", 00:15:09.552 "is_configured": false, 00:15:09.552 "data_offset": 0, 00:15:09.552 "data_size": 63488 00:15:09.552 }, 00:15:09.552 { 00:15:09.552 "name": "BaseBdev3", 00:15:09.552 "uuid": "4cebb15e-ec5f-4a80-8e27-5f531ab428d7", 00:15:09.552 "is_configured": true, 00:15:09.552 "data_offset": 2048, 00:15:09.552 "data_size": 63488 00:15:09.552 } 00:15:09.552 ] 00:15:09.552 }' 00:15:09.552 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.552 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.120 [2024-11-15 11:00:16.945650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.120 BaseBdev1 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:10.120 
11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:10.120 [
00:15:10.120 {
00:15:10.120 "name": "BaseBdev1",
00:15:10.120 "aliases": [
00:15:10.120 "b0211954-cb39-4e37-b15a-1ed173d34200"
00:15:10.120 ],
00:15:10.120 "product_name": "Malloc disk",
00:15:10.120 "block_size": 512,
00:15:10.120 "num_blocks": 65536,
00:15:10.120 "uuid": "b0211954-cb39-4e37-b15a-1ed173d34200",
00:15:10.120 "assigned_rate_limits": {
00:15:10.120 "rw_ios_per_sec": 0,
00:15:10.120 "rw_mbytes_per_sec": 0,
00:15:10.120 "r_mbytes_per_sec": 0,
00:15:10.120 "w_mbytes_per_sec": 0
00:15:10.120 },
00:15:10.120 "claimed": true,
00:15:10.120 "claim_type": "exclusive_write",
00:15:10.120 "zoned": false,
00:15:10.120 "supported_io_types": {
00:15:10.120 "read": true,
00:15:10.120 "write": true,
00:15:10.120 "unmap": true,
00:15:10.120 "flush": true,
00:15:10.120 "reset": true,
00:15:10.120 "nvme_admin": false,
00:15:10.120 "nvme_io": false,
00:15:10.120 "nvme_io_md": false,
00:15:10.120 "write_zeroes": true,
00:15:10.120 "zcopy": true,
00:15:10.120 "get_zone_info": false,
00:15:10.120 "zone_management": false,
00:15:10.120 "zone_append": false,
00:15:10.120 "compare": false,
00:15:10.120 "compare_and_write": false,
00:15:10.120 "abort": true,
00:15:10.120 "seek_hole": false,
00:15:10.120 "seek_data": false,
00:15:10.120 "copy": true,
00:15:10.120 "nvme_iov_md": false
00:15:10.120 },
00:15:10.120 "memory_domains": [
00:15:10.120 {
00:15:10.120 "dma_device_id": "system",
00:15:10.120 "dma_device_type": 1
00:15:10.120 },
00:15:10.120 {
00:15:10.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:10.120 "dma_device_type": 2
00:15:10.120 }
00:15:10.120 ],
00:15:10.120 "driver_specific": {}
00:15:10.120 }
00:15:10.120 ]
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:10.120 11:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:10.120 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.120 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:10.120 "name": "Existed_Raid",
00:15:10.120 "uuid": "2864b44c-2e4c-46b4-a7ac-82c9603fd190",
00:15:10.120 "strip_size_kb": 64,
00:15:10.120 "state": "configuring",
00:15:10.120 "raid_level": "raid5f",
00:15:10.120 "superblock": true,
00:15:10.120 "num_base_bdevs": 3,
00:15:10.120 "num_base_bdevs_discovered": 2,
00:15:10.120 "num_base_bdevs_operational": 3,
00:15:10.120 "base_bdevs_list": [
00:15:10.120 {
00:15:10.120 "name": "BaseBdev1",
00:15:10.120 "uuid": "b0211954-cb39-4e37-b15a-1ed173d34200",
00:15:10.120 "is_configured": true,
00:15:10.120 "data_offset": 2048,
00:15:10.120 "data_size": 63488
00:15:10.120 },
00:15:10.120 {
00:15:10.120 "name": null,
00:15:10.120 "uuid": "252d7623-4d89-4809-a305-228a0ed069df",
00:15:10.120 "is_configured": false,
00:15:10.120 "data_offset": 0,
00:15:10.120 "data_size": 63488
00:15:10.120 },
00:15:10.120 {
00:15:10.120 "name": "BaseBdev3",
00:15:10.120 "uuid": "4cebb15e-ec5f-4a80-8e27-5f531ab428d7",
00:15:10.120 "is_configured": true,
00:15:10.120 "data_offset": 2048,
00:15:10.120 "data_size": 63488
00:15:10.120 }
00:15:10.120 ]
00:15:10.120 }'
00:15:10.120 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:10.120 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:10.688 [2024-11-15 11:00:17.488769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.688 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:10.688 "name": "Existed_Raid",
00:15:10.688 "uuid": "2864b44c-2e4c-46b4-a7ac-82c9603fd190",
00:15:10.688 "strip_size_kb": 64,
00:15:10.688 "state": "configuring",
00:15:10.688 "raid_level": "raid5f",
00:15:10.688 "superblock": true,
00:15:10.688 "num_base_bdevs": 3,
00:15:10.688 "num_base_bdevs_discovered": 1,
00:15:10.689 "num_base_bdevs_operational": 3,
00:15:10.689 "base_bdevs_list": [
00:15:10.689 {
00:15:10.689 "name": "BaseBdev1",
00:15:10.689 "uuid": "b0211954-cb39-4e37-b15a-1ed173d34200",
00:15:10.689 "is_configured": true,
00:15:10.689 "data_offset": 2048,
00:15:10.689 "data_size": 63488
00:15:10.689 },
00:15:10.689 {
00:15:10.689 "name": null,
00:15:10.689 "uuid": "252d7623-4d89-4809-a305-228a0ed069df",
00:15:10.689 "is_configured": false,
00:15:10.689 "data_offset": 0,
00:15:10.689 "data_size": 63488
00:15:10.689 },
00:15:10.689 {
00:15:10.689 "name": null,
00:15:10.689 "uuid": "4cebb15e-ec5f-4a80-8e27-5f531ab428d7",
00:15:10.689 "is_configured": false,
00:15:10.689 "data_offset": 0,
00:15:10.689 "data_size": 63488
00:15:10.689 }
00:15:10.689 ]
00:15:10.689 }'
00:15:10.689 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:10.689 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:11.257 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:11.257 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:11.257 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:11.257 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:15:11.257 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:11.257 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:15:11.257 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:15:11.257 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:11.257 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:11.257 [2024-11-15 11:00:17.964051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:11.257 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:11.257 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:11.257 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:11.257 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:11.257 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:11.258 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:11.258 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:11.258 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:11.258 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:11.258 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:11.258 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:11.258 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:11.258 11:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:11.258 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:11.258 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:11.258 11:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:11.258 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:11.258 "name": "Existed_Raid",
00:15:11.258 "uuid": "2864b44c-2e4c-46b4-a7ac-82c9603fd190",
00:15:11.258 "strip_size_kb": 64,
00:15:11.258 "state": "configuring",
00:15:11.258 "raid_level": "raid5f",
00:15:11.258 "superblock": true,
00:15:11.258 "num_base_bdevs": 3,
00:15:11.258 "num_base_bdevs_discovered": 2,
00:15:11.258 "num_base_bdevs_operational": 3,
00:15:11.258 "base_bdevs_list": [
00:15:11.258 {
00:15:11.258 "name": "BaseBdev1",
00:15:11.258 "uuid": "b0211954-cb39-4e37-b15a-1ed173d34200",
00:15:11.258 "is_configured": true,
00:15:11.258 "data_offset": 2048,
00:15:11.258 "data_size": 63488
00:15:11.258 },
00:15:11.258 {
00:15:11.258 "name": null,
00:15:11.258 "uuid": "252d7623-4d89-4809-a305-228a0ed069df",
00:15:11.258 "is_configured": false,
00:15:11.258 "data_offset": 0,
00:15:11.258 "data_size": 63488
00:15:11.258 },
00:15:11.258 {
00:15:11.258 "name": "BaseBdev3",
00:15:11.258 "uuid": "4cebb15e-ec5f-4a80-8e27-5f531ab428d7",
00:15:11.258 "is_configured": true,
00:15:11.258 "data_offset": 2048,
00:15:11.258 "data_size": 63488
00:15:11.258 }
00:15:11.258 ]
00:15:11.258 }'
00:15:11.258 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:11.258 11:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:11.516 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:11.517 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:15:11.517 11:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:11.517 11:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:11.776 [2024-11-15 11:00:18.479208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:11.776 "name": "Existed_Raid",
00:15:11.776 "uuid": "2864b44c-2e4c-46b4-a7ac-82c9603fd190",
00:15:11.776 "strip_size_kb": 64,
00:15:11.776 "state": "configuring",
00:15:11.776 "raid_level": "raid5f",
00:15:11.776 "superblock": true,
00:15:11.776 "num_base_bdevs": 3,
00:15:11.776 "num_base_bdevs_discovered": 1,
00:15:11.776 "num_base_bdevs_operational": 3,
00:15:11.776 "base_bdevs_list": [
00:15:11.776 {
00:15:11.776 "name": null,
00:15:11.776 "uuid": "b0211954-cb39-4e37-b15a-1ed173d34200",
00:15:11.776 "is_configured": false,
00:15:11.776 "data_offset": 0,
00:15:11.776 "data_size": 63488
00:15:11.776 },
00:15:11.776 {
00:15:11.776 "name": null,
00:15:11.776 "uuid": "252d7623-4d89-4809-a305-228a0ed069df",
00:15:11.776 "is_configured": false,
00:15:11.776 "data_offset": 0,
00:15:11.776 "data_size": 63488
00:15:11.776 },
00:15:11.776 {
00:15:11.776 "name": "BaseBdev3",
00:15:11.776 "uuid": "4cebb15e-ec5f-4a80-8e27-5f531ab428d7",
00:15:11.776 "is_configured": true,
00:15:11.776 "data_offset": 2048,
00:15:11.776 "data_size": 63488
00:15:11.776 }
00:15:11.776 ]
00:15:11.776 }'
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:11.776 11:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:12.345 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:12.345 11:00:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:12.345 11:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.345 11:00:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:12.345 [2024-11-15 11:00:19.026296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:12.345 "name": "Existed_Raid",
00:15:12.345 "uuid": "2864b44c-2e4c-46b4-a7ac-82c9603fd190",
00:15:12.345 "strip_size_kb": 64,
00:15:12.345 "state": "configuring",
00:15:12.345 "raid_level": "raid5f",
00:15:12.345 "superblock": true,
00:15:12.345 "num_base_bdevs": 3,
00:15:12.345 "num_base_bdevs_discovered": 2,
00:15:12.345 "num_base_bdevs_operational": 3,
00:15:12.345 "base_bdevs_list": [
00:15:12.345 {
00:15:12.345 "name": null,
00:15:12.345 "uuid": "b0211954-cb39-4e37-b15a-1ed173d34200",
00:15:12.345 "is_configured": false,
00:15:12.345 "data_offset": 0,
00:15:12.345 "data_size": 63488
00:15:12.345 },
00:15:12.345 {
00:15:12.345 "name": "BaseBdev2",
00:15:12.345 "uuid": "252d7623-4d89-4809-a305-228a0ed069df",
00:15:12.345 "is_configured": true,
00:15:12.345 "data_offset": 2048,
00:15:12.345 "data_size": 63488
00:15:12.345 },
00:15:12.345 {
00:15:12.345 "name": "BaseBdev3",
00:15:12.345 "uuid": "4cebb15e-ec5f-4a80-8e27-5f531ab428d7",
00:15:12.345 "is_configured": true,
00:15:12.345 "data_offset": 2048,
00:15:12.345 "data_size": 63488
00:15:12.345 }
00:15:12.345 ]
00:15:12.345 }'
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:12.345 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:12.604 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:12.604 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:12.604 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.604 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:12.604 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b0211954-cb39-4e37-b15a-1ed173d34200
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:12.862 [2024-11-15 11:00:19.642808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:15:12.862 [2024-11-15 11:00:19.643102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:15:12.862 NewBaseBdev
00:15:12.862 [2024-11-15 11:00:19.643144] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:15:12.862 [2024-11-15 11:00:19.643428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.862 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:12.863 [2024-11-15 11:00:19.649439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:15:12.863 [2024-11-15 11:00:19.649499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:15:12.863 [2024-11-15 11:00:19.649730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:12.863 [
00:15:12.863 {
00:15:12.863 "name": "NewBaseBdev",
00:15:12.863 "aliases": [
00:15:12.863 "b0211954-cb39-4e37-b15a-1ed173d34200"
00:15:12.863 ],
00:15:12.863 "product_name": "Malloc disk",
00:15:12.863 "block_size": 512,
00:15:12.863 "num_blocks": 65536,
00:15:12.863 "uuid": "b0211954-cb39-4e37-b15a-1ed173d34200",
00:15:12.863 "assigned_rate_limits": {
00:15:12.863 "rw_ios_per_sec": 0,
00:15:12.863 "rw_mbytes_per_sec": 0,
00:15:12.863 "r_mbytes_per_sec": 0,
00:15:12.863 "w_mbytes_per_sec": 0
00:15:12.863 },
00:15:12.863 "claimed": true,
00:15:12.863 "claim_type": "exclusive_write",
00:15:12.863 "zoned": false,
00:15:12.863 "supported_io_types": {
00:15:12.863 "read": true,
00:15:12.863 "write": true,
00:15:12.863 "unmap": true,
00:15:12.863 "flush": true,
00:15:12.863 "reset": true,
00:15:12.863 "nvme_admin": false,
00:15:12.863 "nvme_io": false,
00:15:12.863 "nvme_io_md": false,
00:15:12.863 "write_zeroes": true,
00:15:12.863 "zcopy": true,
00:15:12.863 "get_zone_info": false,
00:15:12.863 "zone_management": false,
00:15:12.863 "zone_append": false,
00:15:12.863 "compare": false,
00:15:12.863 "compare_and_write": false,
00:15:12.863 "abort": true,
00:15:12.863 "seek_hole": false,
00:15:12.863 "seek_data": false,
00:15:12.863 "copy": true,
00:15:12.863 "nvme_iov_md": false
00:15:12.863 },
00:15:12.863 "memory_domains": [
00:15:12.863 {
00:15:12.863 "dma_device_id": "system",
00:15:12.863 "dma_device_type": 1
00:15:12.863 },
00:15:12.863 {
00:15:12.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:12.863 "dma_device_type": 2
00:15:12.863 }
00:15:12.863 ],
00:15:12.863 "driver_specific": {}
00:15:12.863 }
00:15:12.863 ]
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:12.863 "name": "Existed_Raid",
00:15:12.863 "uuid": "2864b44c-2e4c-46b4-a7ac-82c9603fd190",
00:15:12.863 "strip_size_kb": 64,
00:15:12.863 "state": "online",
00:15:12.863 "raid_level": "raid5f",
00:15:12.863 "superblock": true,
00:15:12.863 "num_base_bdevs": 3,
00:15:12.863 "num_base_bdevs_discovered": 3,
00:15:12.863 "num_base_bdevs_operational": 3,
00:15:12.863 "base_bdevs_list": [
00:15:12.863 {
00:15:12.863 "name": "NewBaseBdev",
00:15:12.863 "uuid": "b0211954-cb39-4e37-b15a-1ed173d34200",
00:15:12.863 "is_configured": true,
00:15:12.863 "data_offset": 2048,
00:15:12.863 "data_size": 63488
00:15:12.863 },
00:15:12.863 {
00:15:12.863 "name": "BaseBdev2",
00:15:12.863 "uuid": "252d7623-4d89-4809-a305-228a0ed069df",
00:15:12.863 "is_configured": true,
00:15:12.863 "data_offset": 2048,
00:15:12.863 "data_size": 63488
00:15:12.863 },
00:15:12.863 {
00:15:12.863 "name": "BaseBdev3",
00:15:12.863 "uuid": "4cebb15e-ec5f-4a80-8e27-5f531ab428d7",
00:15:12.863 "is_configured": true,
00:15:12.863 "data_offset": 2048,
00:15:12.863 "data_size": 63488
00:15:12.863 }
00:15:12.863 ]
00:15:12.863 }'
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:12.863 11:00:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:13.430 [2024-11-15 11:00:20.151326] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:13.430 "name": "Existed_Raid",
00:15:13.430 "aliases": [
00:15:13.430 "2864b44c-2e4c-46b4-a7ac-82c9603fd190"
00:15:13.430 ],
00:15:13.430 "product_name": "Raid Volume",
00:15:13.430 "block_size": 512,
00:15:13.430 "num_blocks": 126976,
00:15:13.430 "uuid": "2864b44c-2e4c-46b4-a7ac-82c9603fd190",
00:15:13.430 "assigned_rate_limits": {
00:15:13.430 "rw_ios_per_sec": 0,
00:15:13.430 "rw_mbytes_per_sec": 0,
00:15:13.430 "r_mbytes_per_sec": 0,
00:15:13.430 "w_mbytes_per_sec": 0
00:15:13.430 },
00:15:13.430 "claimed": false,
00:15:13.430 "zoned": false,
00:15:13.430 "supported_io_types": {
00:15:13.430 "read": true,
00:15:13.430 "write": true,
00:15:13.430 "unmap": false,
00:15:13.430 "flush": false,
00:15:13.430 "reset": true,
00:15:13.430 "nvme_admin": false,
00:15:13.430 "nvme_io": false,
00:15:13.430 "nvme_io_md": false,
00:15:13.430 "write_zeroes": true,
00:15:13.430 "zcopy": false,
00:15:13.430 "get_zone_info": false,
00:15:13.430 "zone_management": false,
00:15:13.430 "zone_append": false,
00:15:13.430 "compare": false,
00:15:13.430 "compare_and_write": false,
00:15:13.430 "abort": false,
00:15:13.430 "seek_hole": false,
00:15:13.430 "seek_data": false,
00:15:13.430 "copy": false,
00:15:13.430 "nvme_iov_md": false
00:15:13.430 },
00:15:13.430 "driver_specific": {
00:15:13.430 "raid": {
00:15:13.430 "uuid": "2864b44c-2e4c-46b4-a7ac-82c9603fd190",
00:15:13.430 "strip_size_kb": 64,
00:15:13.430 "state": "online",
00:15:13.430 "raid_level": "raid5f",
00:15:13.430 "superblock": true,
00:15:13.430 "num_base_bdevs": 3,
00:15:13.430 "num_base_bdevs_discovered": 3,
00:15:13.430 "num_base_bdevs_operational": 3,
00:15:13.430 "base_bdevs_list": [
00:15:13.430 {
00:15:13.430 "name": "NewBaseBdev",
00:15:13.430 "uuid": "b0211954-cb39-4e37-b15a-1ed173d34200",
00:15:13.430 "is_configured": true,
00:15:13.430 "data_offset": 2048,
00:15:13.430 "data_size": 63488
00:15:13.430 },
00:15:13.430 {
00:15:13.430 "name": "BaseBdev2",
00:15:13.430 "uuid": "252d7623-4d89-4809-a305-228a0ed069df",
00:15:13.430 "is_configured": true,
00:15:13.430 "data_offset": 2048,
00:15:13.430 "data_size": 63488
00:15:13.430 },
00:15:13.430 {
00:15:13.430 "name": "BaseBdev3",
00:15:13.430 "uuid": "4cebb15e-ec5f-4a80-8e27-5f531ab428d7",
00:15:13.430 "is_configured": true,
00:15:13.430 "data_offset": 2048,
00:15:13.430 "data_size": 63488
00:15:13.430 }
00:15:13.430 ]
00:15:13.430 }
00:15:13.430 }
00:15:13.430 }'
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:15:13.430 BaseBdev2
00:15:13.430 BaseBdev3'
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:13.430 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:13.431 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:15:13.431 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:13.431 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.431 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:13.431 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.431 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:13.431 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:13.431 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:13.689 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:15:13.689 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:13.689 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.689 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:13.689 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.689 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:13.689 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:13.689 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:13.689 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.689 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:13.689 [2024-11-15 11:00:20.414675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:13.689 [2024-11-15 11:00:20.414706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:13.689 [2024-11-15 11:00:20.414796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:13.690 [2024-11-15 11:00:20.415102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:13.690 [2024-11-15 11:00:20.415116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:15:13.690 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.690 11:00:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80677
00:15:13.690 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 80677 ']'
00:15:13.690 11:00:20
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 80677 00:15:13.690 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:15:13.690 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:13.690 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80677 00:15:13.690 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:13.690 killing process with pid 80677 00:15:13.690 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:13.690 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80677' 00:15:13.690 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 80677 00:15:13.690 [2024-11-15 11:00:20.457605] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:13.690 11:00:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 80677 00:15:13.949 [2024-11-15 11:00:20.764314] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:15.326 ************************************ 00:15:15.326 END TEST raid5f_state_function_test_sb 00:15:15.326 ************************************ 00:15:15.326 11:00:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:15.326 00:15:15.326 real 0m10.842s 00:15:15.326 user 0m17.295s 00:15:15.326 sys 0m1.954s 00:15:15.326 11:00:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:15.326 11:00:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.326 11:00:21 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:15:15.326 11:00:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:15.326 11:00:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:15.326 11:00:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:15.326 ************************************ 00:15:15.326 START TEST raid5f_superblock_test 00:15:15.326 ************************************ 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81303 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81303 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81303 ']' 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:15.326 11:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.327 11:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:15.327 11:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.327 [2024-11-15 11:00:22.045416] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
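An aside on the property checks traced in this log: `bdev/bdev_raid.sh@188` extracts the configured base bdev names from the raid bdev's JSON with the jq filter `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`. A minimal standalone sketch of that same selection in Python, run against an abridged copy of the JSON the trace dumps (the variable names are ours, not part of the test scripts):

```python
import json

# Abridged fragment of the raid bdev JSON dumped by rpc_cmd in the trace.
raid_info = json.loads('''{
  "driver_specific": {"raid": {"base_bdevs_list": [
    {"name": "pt1", "is_configured": true},
    {"name": "pt2", "is_configured": true},
    {"name": "pt3", "is_configured": true}
  ]}}
}''')

# Equivalent of the jq filter in bdev_raid.sh@188:
# .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
names = [b["name"]
         for b in raid_info["driver_specific"]["raid"]["base_bdevs_list"]
         if b["is_configured"]]
print(names)  # -> ['pt1', 'pt2', 'pt3']
```

The test then loops over these names and compares each base bdev's `block_size`/`md_size`/`md_interleave`/`dif_type` tuple against the raid bdev's, which is the `[[ 512 == \5\1\2\ \ \ ]]` pattern repeated throughout the trace.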
00:15:15.327 [2024-11-15 11:00:22.045615] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81303 ] 00:15:15.327 [2024-11-15 11:00:22.199910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.586 [2024-11-15 11:00:22.316398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.845 [2024-11-15 11:00:22.519992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:15.845 [2024-11-15 11:00:22.520138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.104 malloc1 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.104 [2024-11-15 11:00:22.926259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:16.104 [2024-11-15 11:00:22.926383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.104 [2024-11-15 11:00:22.926427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:16.104 [2024-11-15 11:00:22.926478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.104 [2024-11-15 11:00:22.928603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.104 [2024-11-15 11:00:22.928676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:16.104 pt1 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.104 malloc2 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.104 [2024-11-15 11:00:22.985929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:16.104 [2024-11-15 11:00:22.986027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.104 [2024-11-15 11:00:22.986083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:16.104 [2024-11-15 11:00:22.986116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.104 [2024-11-15 11:00:22.988470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.104 [2024-11-15 11:00:22.988549] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:16.104 pt2 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.104 11:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.365 malloc3 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.365 [2024-11-15 11:00:23.056800] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:16.365 [2024-11-15 11:00:23.056898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.365 [2024-11-15 11:00:23.056954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:16.365 [2024-11-15 11:00:23.056999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.365 [2024-11-15 11:00:23.059175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.365 [2024-11-15 11:00:23.059251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:16.365 pt3 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.365 [2024-11-15 11:00:23.068845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:16.365 [2024-11-15 11:00:23.070759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:16.365 [2024-11-15 11:00:23.070877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:16.365 [2024-11-15 11:00:23.071072] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:16.365 [2024-11-15 11:00:23.071142] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
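The creation step just below reports `blockcnt 126976, blocklen 512` for the raid5f bdev built from three base bdevs whose data regions are 63488 blocks each (`data_offset 2048`, `data_size 63488`). A small arithmetic sketch of why those numbers line up: raid5f devotes one base bdev's worth of space per stripe to parity, so usable capacity is one data region times (n - 1). The helper name here is ours for illustration, not an SPDK API:

```python
# Illustrative only: raid5f usable capacity as seen in the trace.
# One base bdev's worth of space per stripe holds parity, so the
# exposed block count is (num_base_bdevs - 1) * per-bdev data size.
def raid5f_num_blocks(num_base_bdevs: int, data_size_blocks: int) -> int:
    return (num_base_bdevs - 1) * data_size_blocks

# Matches "blockcnt 126976, blocklen 512" logged by raid_bdev_configure_cont
# for 3 base bdevs with data_size 63488.
print(raid5f_num_blocks(3, 63488))  # -> 126976
```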
00:15:16.365 [2024-11-15 11:00:23.071484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:16.365 [2024-11-15 11:00:23.077443] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:16.365 [2024-11-15 11:00:23.077498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:16.365 [2024-11-15 11:00:23.077741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.365 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.365 "name": "raid_bdev1", 00:15:16.365 "uuid": "a8e82474-c00d-4fdb-b835-16f8d51c3cde", 00:15:16.365 "strip_size_kb": 64, 00:15:16.365 "state": "online", 00:15:16.365 "raid_level": "raid5f", 00:15:16.365 "superblock": true, 00:15:16.365 "num_base_bdevs": 3, 00:15:16.365 "num_base_bdevs_discovered": 3, 00:15:16.365 "num_base_bdevs_operational": 3, 00:15:16.365 "base_bdevs_list": [ 00:15:16.365 { 00:15:16.365 "name": "pt1", 00:15:16.365 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:16.365 "is_configured": true, 00:15:16.365 "data_offset": 2048, 00:15:16.365 "data_size": 63488 00:15:16.365 }, 00:15:16.365 { 00:15:16.365 "name": "pt2", 00:15:16.365 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:16.365 "is_configured": true, 00:15:16.365 "data_offset": 2048, 00:15:16.365 "data_size": 63488 00:15:16.366 }, 00:15:16.366 { 00:15:16.366 "name": "pt3", 00:15:16.366 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:16.366 "is_configured": true, 00:15:16.366 "data_offset": 2048, 00:15:16.366 "data_size": 63488 00:15:16.366 } 00:15:16.366 ] 00:15:16.366 }' 00:15:16.366 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.366 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.625 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:16.625 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:16.625 11:00:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:16.625 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:16.625 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:16.625 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:16.625 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:16.625 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.625 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.625 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:16.625 [2024-11-15 11:00:23.495822] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.625 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.625 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:16.625 "name": "raid_bdev1", 00:15:16.625 "aliases": [ 00:15:16.625 "a8e82474-c00d-4fdb-b835-16f8d51c3cde" 00:15:16.625 ], 00:15:16.625 "product_name": "Raid Volume", 00:15:16.625 "block_size": 512, 00:15:16.625 "num_blocks": 126976, 00:15:16.626 "uuid": "a8e82474-c00d-4fdb-b835-16f8d51c3cde", 00:15:16.626 "assigned_rate_limits": { 00:15:16.626 "rw_ios_per_sec": 0, 00:15:16.626 "rw_mbytes_per_sec": 0, 00:15:16.626 "r_mbytes_per_sec": 0, 00:15:16.626 "w_mbytes_per_sec": 0 00:15:16.626 }, 00:15:16.626 "claimed": false, 00:15:16.626 "zoned": false, 00:15:16.626 "supported_io_types": { 00:15:16.626 "read": true, 00:15:16.626 "write": true, 00:15:16.626 "unmap": false, 00:15:16.626 "flush": false, 00:15:16.626 "reset": true, 00:15:16.626 "nvme_admin": false, 00:15:16.626 "nvme_io": false, 00:15:16.626 "nvme_io_md": false, 
00:15:16.626 "write_zeroes": true, 00:15:16.626 "zcopy": false, 00:15:16.626 "get_zone_info": false, 00:15:16.626 "zone_management": false, 00:15:16.626 "zone_append": false, 00:15:16.626 "compare": false, 00:15:16.626 "compare_and_write": false, 00:15:16.626 "abort": false, 00:15:16.626 "seek_hole": false, 00:15:16.626 "seek_data": false, 00:15:16.626 "copy": false, 00:15:16.626 "nvme_iov_md": false 00:15:16.626 }, 00:15:16.626 "driver_specific": { 00:15:16.626 "raid": { 00:15:16.626 "uuid": "a8e82474-c00d-4fdb-b835-16f8d51c3cde", 00:15:16.626 "strip_size_kb": 64, 00:15:16.626 "state": "online", 00:15:16.626 "raid_level": "raid5f", 00:15:16.626 "superblock": true, 00:15:16.626 "num_base_bdevs": 3, 00:15:16.626 "num_base_bdevs_discovered": 3, 00:15:16.626 "num_base_bdevs_operational": 3, 00:15:16.626 "base_bdevs_list": [ 00:15:16.626 { 00:15:16.626 "name": "pt1", 00:15:16.626 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:16.626 "is_configured": true, 00:15:16.626 "data_offset": 2048, 00:15:16.626 "data_size": 63488 00:15:16.626 }, 00:15:16.626 { 00:15:16.626 "name": "pt2", 00:15:16.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:16.626 "is_configured": true, 00:15:16.626 "data_offset": 2048, 00:15:16.626 "data_size": 63488 00:15:16.626 }, 00:15:16.626 { 00:15:16.626 "name": "pt3", 00:15:16.626 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:16.626 "is_configured": true, 00:15:16.626 "data_offset": 2048, 00:15:16.626 "data_size": 63488 00:15:16.626 } 00:15:16.626 ] 00:15:16.626 } 00:15:16.626 } 00:15:16.626 }' 00:15:16.626 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:16.885 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:16.886 pt2 00:15:16.886 pt3' 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.886 
11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.886 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.886 [2024-11-15 11:00:23.795269] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.145 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.145 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a8e82474-c00d-4fdb-b835-16f8d51c3cde 00:15:17.145 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a8e82474-c00d-4fdb-b835-16f8d51c3cde ']' 00:15:17.145 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:17.145 11:00:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.145 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.145 [2024-11-15 11:00:23.850971] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.145 [2024-11-15 11:00:23.851047] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.145 [2024-11-15 11:00:23.851151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.145 [2024-11-15 11:00:23.851242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.145 [2024-11-15 11:00:23.851290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.146 11:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 [2024-11-15 11:00:23.994751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:17.146 [2024-11-15 11:00:23.996783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:17.146 [2024-11-15 11:00:23.996883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:17.146 [2024-11-15 11:00:23.996955] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:17.146 [2024-11-15 11:00:23.997045] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:17.146 [2024-11-15 11:00:23.997090] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:17.146 [2024-11-15 11:00:23.997126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.146 [2024-11-15 11:00:23.997137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:17.146 request: 00:15:17.146 { 00:15:17.146 "name": "raid_bdev1", 00:15:17.146 "raid_level": "raid5f", 00:15:17.146 "base_bdevs": [ 00:15:17.146 "malloc1", 00:15:17.146 "malloc2", 00:15:17.146 "malloc3" 00:15:17.146 ], 00:15:17.146 "strip_size_kb": 64, 00:15:17.146 "superblock": false, 00:15:17.146 "method": "bdev_raid_create", 00:15:17.146 "req_id": 1 00:15:17.146 } 00:15:17.146 Got JSON-RPC error response 00:15:17.146 response: 00:15:17.146 { 00:15:17.146 "code": -17, 00:15:17.146 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:17.146 } 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.146 
11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 [2024-11-15 11:00:24.050618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:17.146 [2024-11-15 11:00:24.050723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.146 [2024-11-15 11:00:24.050759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:17.146 [2024-11-15 11:00:24.050786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.146 [2024-11-15 11:00:24.053112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.146 [2024-11-15 11:00:24.053186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:17.146 [2024-11-15 11:00:24.053306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:17.146 [2024-11-15 11:00:24.053409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:17.146 pt1 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.146 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.406 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.406 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.406 "name": "raid_bdev1", 00:15:17.406 "uuid": "a8e82474-c00d-4fdb-b835-16f8d51c3cde", 00:15:17.406 "strip_size_kb": 64, 00:15:17.406 "state": "configuring", 00:15:17.406 "raid_level": "raid5f", 00:15:17.406 "superblock": true, 00:15:17.406 "num_base_bdevs": 3, 00:15:17.406 "num_base_bdevs_discovered": 1, 00:15:17.406 
"num_base_bdevs_operational": 3, 00:15:17.406 "base_bdevs_list": [ 00:15:17.406 { 00:15:17.406 "name": "pt1", 00:15:17.406 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:17.406 "is_configured": true, 00:15:17.406 "data_offset": 2048, 00:15:17.406 "data_size": 63488 00:15:17.406 }, 00:15:17.406 { 00:15:17.406 "name": null, 00:15:17.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.406 "is_configured": false, 00:15:17.406 "data_offset": 2048, 00:15:17.406 "data_size": 63488 00:15:17.406 }, 00:15:17.406 { 00:15:17.406 "name": null, 00:15:17.406 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.406 "is_configured": false, 00:15:17.406 "data_offset": 2048, 00:15:17.406 "data_size": 63488 00:15:17.406 } 00:15:17.406 ] 00:15:17.406 }' 00:15:17.406 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.406 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.666 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:17.666 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:17.666 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.666 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.667 [2024-11-15 11:00:24.549796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:17.667 [2024-11-15 11:00:24.549931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.667 [2024-11-15 11:00:24.549958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:17.667 [2024-11-15 11:00:24.549967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.667 [2024-11-15 11:00:24.550446] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.667 [2024-11-15 11:00:24.550473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:17.667 [2024-11-15 11:00:24.550566] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:17.667 [2024-11-15 11:00:24.550588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:17.667 pt2 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.667 [2024-11-15 11:00:24.561765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.667 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.932 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.932 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.932 "name": "raid_bdev1", 00:15:17.932 "uuid": "a8e82474-c00d-4fdb-b835-16f8d51c3cde", 00:15:17.932 "strip_size_kb": 64, 00:15:17.932 "state": "configuring", 00:15:17.932 "raid_level": "raid5f", 00:15:17.932 "superblock": true, 00:15:17.932 "num_base_bdevs": 3, 00:15:17.932 "num_base_bdevs_discovered": 1, 00:15:17.932 "num_base_bdevs_operational": 3, 00:15:17.932 "base_bdevs_list": [ 00:15:17.932 { 00:15:17.932 "name": "pt1", 00:15:17.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:17.932 "is_configured": true, 00:15:17.932 "data_offset": 2048, 00:15:17.932 "data_size": 63488 00:15:17.932 }, 00:15:17.932 { 00:15:17.932 "name": null, 00:15:17.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.932 "is_configured": false, 00:15:17.932 "data_offset": 0, 00:15:17.932 "data_size": 63488 00:15:17.932 }, 00:15:17.932 { 00:15:17.932 "name": null, 00:15:17.932 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.932 "is_configured": false, 00:15:17.932 "data_offset": 2048, 00:15:17.932 "data_size": 63488 00:15:17.932 } 00:15:17.932 ] 00:15:17.932 }' 00:15:17.932 11:00:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.932 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.194 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:18.194 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:18.194 11:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:18.194 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.194 11:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.194 [2024-11-15 11:00:25.004987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:18.194 [2024-11-15 11:00:25.005141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.194 [2024-11-15 11:00:25.005180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:18.194 [2024-11-15 11:00:25.005218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.194 [2024-11-15 11:00:25.005744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.194 [2024-11-15 11:00:25.005812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:18.194 [2024-11-15 11:00:25.005930] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:18.194 [2024-11-15 11:00:25.005990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:18.194 pt2 00:15:18.194 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.194 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:18.194 11:00:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:18.194 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:18.194 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.194 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.194 [2024-11-15 11:00:25.016941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:18.195 [2024-11-15 11:00:25.016992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.195 [2024-11-15 11:00:25.017006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:18.195 [2024-11-15 11:00:25.017015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.195 [2024-11-15 11:00:25.017426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.195 [2024-11-15 11:00:25.017448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:18.195 [2024-11-15 11:00:25.017526] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:18.195 [2024-11-15 11:00:25.017557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:18.195 [2024-11-15 11:00:25.017677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:18.195 [2024-11-15 11:00:25.017694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:18.195 [2024-11-15 11:00:25.017957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:18.195 [2024-11-15 11:00:25.023736] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:18.195 [2024-11-15 11:00:25.023797] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:18.195 [2024-11-15 11:00:25.023996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.195 pt3 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.195 "name": "raid_bdev1", 00:15:18.195 "uuid": "a8e82474-c00d-4fdb-b835-16f8d51c3cde", 00:15:18.195 "strip_size_kb": 64, 00:15:18.195 "state": "online", 00:15:18.195 "raid_level": "raid5f", 00:15:18.195 "superblock": true, 00:15:18.195 "num_base_bdevs": 3, 00:15:18.195 "num_base_bdevs_discovered": 3, 00:15:18.195 "num_base_bdevs_operational": 3, 00:15:18.195 "base_bdevs_list": [ 00:15:18.195 { 00:15:18.195 "name": "pt1", 00:15:18.195 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:18.195 "is_configured": true, 00:15:18.195 "data_offset": 2048, 00:15:18.195 "data_size": 63488 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "name": "pt2", 00:15:18.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:18.195 "is_configured": true, 00:15:18.195 "data_offset": 2048, 00:15:18.195 "data_size": 63488 00:15:18.195 }, 00:15:18.195 { 00:15:18.195 "name": "pt3", 00:15:18.195 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:18.195 "is_configured": true, 00:15:18.195 "data_offset": 2048, 00:15:18.195 "data_size": 63488 00:15:18.195 } 00:15:18.195 ] 00:15:18.195 }' 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.195 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.763 [2024-11-15 11:00:25.529913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:18.763 "name": "raid_bdev1", 00:15:18.763 "aliases": [ 00:15:18.763 "a8e82474-c00d-4fdb-b835-16f8d51c3cde" 00:15:18.763 ], 00:15:18.763 "product_name": "Raid Volume", 00:15:18.763 "block_size": 512, 00:15:18.763 "num_blocks": 126976, 00:15:18.763 "uuid": "a8e82474-c00d-4fdb-b835-16f8d51c3cde", 00:15:18.763 "assigned_rate_limits": { 00:15:18.763 "rw_ios_per_sec": 0, 00:15:18.763 "rw_mbytes_per_sec": 0, 00:15:18.763 "r_mbytes_per_sec": 0, 00:15:18.763 "w_mbytes_per_sec": 0 00:15:18.763 }, 00:15:18.763 "claimed": false, 00:15:18.763 "zoned": false, 00:15:18.763 "supported_io_types": { 00:15:18.763 "read": true, 00:15:18.763 "write": true, 00:15:18.763 "unmap": false, 00:15:18.763 "flush": false, 00:15:18.763 "reset": true, 00:15:18.763 "nvme_admin": false, 00:15:18.763 "nvme_io": false, 00:15:18.763 "nvme_io_md": false, 00:15:18.763 "write_zeroes": true, 00:15:18.763 "zcopy": false, 00:15:18.763 
"get_zone_info": false, 00:15:18.763 "zone_management": false, 00:15:18.763 "zone_append": false, 00:15:18.763 "compare": false, 00:15:18.763 "compare_and_write": false, 00:15:18.763 "abort": false, 00:15:18.763 "seek_hole": false, 00:15:18.763 "seek_data": false, 00:15:18.763 "copy": false, 00:15:18.763 "nvme_iov_md": false 00:15:18.763 }, 00:15:18.763 "driver_specific": { 00:15:18.763 "raid": { 00:15:18.763 "uuid": "a8e82474-c00d-4fdb-b835-16f8d51c3cde", 00:15:18.763 "strip_size_kb": 64, 00:15:18.763 "state": "online", 00:15:18.763 "raid_level": "raid5f", 00:15:18.763 "superblock": true, 00:15:18.763 "num_base_bdevs": 3, 00:15:18.763 "num_base_bdevs_discovered": 3, 00:15:18.763 "num_base_bdevs_operational": 3, 00:15:18.763 "base_bdevs_list": [ 00:15:18.763 { 00:15:18.763 "name": "pt1", 00:15:18.763 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:18.763 "is_configured": true, 00:15:18.763 "data_offset": 2048, 00:15:18.763 "data_size": 63488 00:15:18.763 }, 00:15:18.763 { 00:15:18.763 "name": "pt2", 00:15:18.763 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:18.763 "is_configured": true, 00:15:18.763 "data_offset": 2048, 00:15:18.763 "data_size": 63488 00:15:18.763 }, 00:15:18.763 { 00:15:18.763 "name": "pt3", 00:15:18.763 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:18.763 "is_configured": true, 00:15:18.763 "data_offset": 2048, 00:15:18.763 "data_size": 63488 00:15:18.763 } 00:15:18.763 ] 00:15:18.763 } 00:15:18.763 } 00:15:18.763 }' 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:18.763 pt2 00:15:18.763 pt3' 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.763 11:00:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.763 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.022 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:19.022 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:19.022 11:00:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.022 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.022 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:19.022 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.022 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.022 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.022 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:19.022 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:19.022 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:19.022 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:19.022 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.022 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.022 [2024-11-15 11:00:25.761524] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.022 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a8e82474-c00d-4fdb-b835-16f8d51c3cde '!=' a8e82474-c00d-4fdb-b835-16f8d51c3cde ']' 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.023 [2024-11-15 11:00:25.801297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.023 "name": "raid_bdev1", 00:15:19.023 "uuid": "a8e82474-c00d-4fdb-b835-16f8d51c3cde", 00:15:19.023 "strip_size_kb": 64, 00:15:19.023 "state": "online", 00:15:19.023 "raid_level": "raid5f", 00:15:19.023 "superblock": true, 00:15:19.023 "num_base_bdevs": 3, 00:15:19.023 "num_base_bdevs_discovered": 2, 00:15:19.023 "num_base_bdevs_operational": 2, 00:15:19.023 "base_bdevs_list": [ 00:15:19.023 { 00:15:19.023 "name": null, 00:15:19.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.023 "is_configured": false, 00:15:19.023 "data_offset": 0, 00:15:19.023 "data_size": 63488 00:15:19.023 }, 00:15:19.023 { 00:15:19.023 "name": "pt2", 00:15:19.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.023 "is_configured": true, 00:15:19.023 "data_offset": 2048, 00:15:19.023 "data_size": 63488 00:15:19.023 }, 00:15:19.023 { 00:15:19.023 "name": "pt3", 00:15:19.023 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.023 "is_configured": true, 00:15:19.023 "data_offset": 2048, 00:15:19.023 "data_size": 63488 00:15:19.023 } 00:15:19.023 ] 00:15:19.023 }' 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.023 11:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.589 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:19.589 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.590 [2024-11-15 11:00:26.236529] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:19.590 [2024-11-15 11:00:26.236620] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:19.590 [2024-11-15 11:00:26.236726] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.590 [2024-11-15 11:00:26.236815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:19.590 [2024-11-15 11:00:26.236866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.590 [2024-11-15 11:00:26.324335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:19.590 [2024-11-15 11:00:26.324388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.590 [2024-11-15 11:00:26.324404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:19.590 [2024-11-15 11:00:26.324414] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:19.590 [2024-11-15 11:00:26.326726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.590 [2024-11-15 11:00:26.326809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:19.590 [2024-11-15 11:00:26.326892] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:19.590 [2024-11-15 11:00:26.326944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:19.590 pt2 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.590 "name": "raid_bdev1", 00:15:19.590 "uuid": "a8e82474-c00d-4fdb-b835-16f8d51c3cde", 00:15:19.590 "strip_size_kb": 64, 00:15:19.590 "state": "configuring", 00:15:19.590 "raid_level": "raid5f", 00:15:19.590 "superblock": true, 00:15:19.590 "num_base_bdevs": 3, 00:15:19.590 "num_base_bdevs_discovered": 1, 00:15:19.590 "num_base_bdevs_operational": 2, 00:15:19.590 "base_bdevs_list": [ 00:15:19.590 { 00:15:19.590 "name": null, 00:15:19.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.590 "is_configured": false, 00:15:19.590 "data_offset": 2048, 00:15:19.590 "data_size": 63488 00:15:19.590 }, 00:15:19.590 { 00:15:19.590 "name": "pt2", 00:15:19.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.590 "is_configured": true, 00:15:19.590 "data_offset": 2048, 00:15:19.590 "data_size": 63488 00:15:19.590 }, 00:15:19.590 { 00:15:19.590 "name": null, 00:15:19.590 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.590 "is_configured": false, 00:15:19.590 "data_offset": 2048, 00:15:19.590 "data_size": 63488 00:15:19.590 } 00:15:19.590 ] 00:15:19.590 }' 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.590 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.159 [2024-11-15 11:00:26.791546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:20.159 [2024-11-15 11:00:26.791674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.159 [2024-11-15 11:00:26.791715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:20.159 [2024-11-15 11:00:26.791745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.159 [2024-11-15 11:00:26.792230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.159 [2024-11-15 11:00:26.792297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:20.159 [2024-11-15 11:00:26.792422] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:20.159 [2024-11-15 11:00:26.792521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:20.159 [2024-11-15 11:00:26.792696] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:20.159 [2024-11-15 11:00:26.792740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:20.159 [2024-11-15 11:00:26.793025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:20.159 [2024-11-15 11:00:26.799067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:20.159 [2024-11-15 11:00:26.799089] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:15:20.159 [2024-11-15 11:00:26.799517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.159 pt3 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.159 11:00:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.159 "name": "raid_bdev1", 00:15:20.159 "uuid": "a8e82474-c00d-4fdb-b835-16f8d51c3cde", 00:15:20.159 "strip_size_kb": 64, 00:15:20.159 "state": "online", 00:15:20.159 "raid_level": "raid5f", 00:15:20.159 "superblock": true, 00:15:20.159 "num_base_bdevs": 3, 00:15:20.159 "num_base_bdevs_discovered": 2, 00:15:20.159 "num_base_bdevs_operational": 2, 00:15:20.159 "base_bdevs_list": [ 00:15:20.159 { 00:15:20.159 "name": null, 00:15:20.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.159 "is_configured": false, 00:15:20.159 "data_offset": 2048, 00:15:20.159 "data_size": 63488 00:15:20.159 }, 00:15:20.159 { 00:15:20.159 "name": "pt2", 00:15:20.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.159 "is_configured": true, 00:15:20.159 "data_offset": 2048, 00:15:20.159 "data_size": 63488 00:15:20.159 }, 00:15:20.159 { 00:15:20.159 "name": "pt3", 00:15:20.159 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.159 "is_configured": true, 00:15:20.159 "data_offset": 2048, 00:15:20.159 "data_size": 63488 00:15:20.159 } 00:15:20.159 ] 00:15:20.159 }' 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.159 11:00:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.420 [2024-11-15 11:00:27.230682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.420 [2024-11-15 11:00:27.230808] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.420 [2024-11-15 11:00:27.230940] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.420 [2024-11-15 11:00:27.231044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.420 [2024-11-15 11:00:27.231104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.420 [2024-11-15 11:00:27.290600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:20.420 [2024-11-15 11:00:27.290699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.420 [2024-11-15 11:00:27.290723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:20.420 [2024-11-15 11:00:27.290733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.420 [2024-11-15 11:00:27.293149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.420 [2024-11-15 11:00:27.293190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:20.420 [2024-11-15 11:00:27.293270] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:20.420 [2024-11-15 11:00:27.293333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:20.420 [2024-11-15 11:00:27.293471] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:20.420 [2024-11-15 11:00:27.293481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.420 [2024-11-15 11:00:27.293497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:20.420 [2024-11-15 11:00:27.293559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:20.420 pt1 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:20.420 11:00:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.420 "name": "raid_bdev1", 00:15:20.420 "uuid": "a8e82474-c00d-4fdb-b835-16f8d51c3cde", 00:15:20.420 "strip_size_kb": 64, 00:15:20.420 "state": "configuring", 00:15:20.420 "raid_level": "raid5f", 00:15:20.420 
"superblock": true, 00:15:20.420 "num_base_bdevs": 3, 00:15:20.420 "num_base_bdevs_discovered": 1, 00:15:20.420 "num_base_bdevs_operational": 2, 00:15:20.420 "base_bdevs_list": [ 00:15:20.420 { 00:15:20.420 "name": null, 00:15:20.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.420 "is_configured": false, 00:15:20.420 "data_offset": 2048, 00:15:20.420 "data_size": 63488 00:15:20.420 }, 00:15:20.420 { 00:15:20.420 "name": "pt2", 00:15:20.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.420 "is_configured": true, 00:15:20.420 "data_offset": 2048, 00:15:20.420 "data_size": 63488 00:15:20.420 }, 00:15:20.420 { 00:15:20.420 "name": null, 00:15:20.420 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.420 "is_configured": false, 00:15:20.420 "data_offset": 2048, 00:15:20.420 "data_size": 63488 00:15:20.420 } 00:15:20.420 ] 00:15:20.420 }' 00:15:20.420 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.685 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.944 [2024-11-15 11:00:27.801755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:20.944 [2024-11-15 11:00:27.801881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.944 [2024-11-15 11:00:27.801924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:20.944 [2024-11-15 11:00:27.801959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.944 [2024-11-15 11:00:27.802565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.944 [2024-11-15 11:00:27.802637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:20.944 [2024-11-15 11:00:27.802773] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:20.944 [2024-11-15 11:00:27.802834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:20.944 [2024-11-15 11:00:27.803011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:20.944 [2024-11-15 11:00:27.803057] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:20.944 [2024-11-15 11:00:27.803383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:20.944 [2024-11-15 11:00:27.810153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:20.944 [2024-11-15 11:00:27.810223] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:20.944 [2024-11-15 11:00:27.810568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.944 pt3 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.944 "name": "raid_bdev1", 00:15:20.944 "uuid": "a8e82474-c00d-4fdb-b835-16f8d51c3cde", 00:15:20.944 "strip_size_kb": 64, 00:15:20.944 "state": "online", 00:15:20.944 "raid_level": 
"raid5f", 00:15:20.944 "superblock": true, 00:15:20.944 "num_base_bdevs": 3, 00:15:20.944 "num_base_bdevs_discovered": 2, 00:15:20.944 "num_base_bdevs_operational": 2, 00:15:20.944 "base_bdevs_list": [ 00:15:20.944 { 00:15:20.944 "name": null, 00:15:20.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.944 "is_configured": false, 00:15:20.944 "data_offset": 2048, 00:15:20.944 "data_size": 63488 00:15:20.944 }, 00:15:20.944 { 00:15:20.944 "name": "pt2", 00:15:20.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.944 "is_configured": true, 00:15:20.944 "data_offset": 2048, 00:15:20.944 "data_size": 63488 00:15:20.944 }, 00:15:20.944 { 00:15:20.944 "name": "pt3", 00:15:20.944 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.944 "is_configured": true, 00:15:20.944 "data_offset": 2048, 00:15:20.944 "data_size": 63488 00:15:20.944 } 00:15:20.944 ] 00:15:20.944 }' 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.944 11:00:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:21.513 [2024-11-15 11:00:28.301821] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a8e82474-c00d-4fdb-b835-16f8d51c3cde '!=' a8e82474-c00d-4fdb-b835-16f8d51c3cde ']' 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81303 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81303 ']' 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81303 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81303 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:21.513 killing process with pid 81303 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81303' 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 81303 00:15:21.513 [2024-11-15 11:00:28.366446] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.513 [2024-11-15 11:00:28.366557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:21.513 11:00:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 81303 00:15:21.513 [2024-11-15 11:00:28.366624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.513 [2024-11-15 11:00:28.366637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:21.773 [2024-11-15 11:00:28.680145] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:23.150 11:00:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:23.150 00:15:23.150 real 0m7.839s 00:15:23.150 user 0m12.291s 00:15:23.150 sys 0m1.399s 00:15:23.150 11:00:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:23.150 11:00:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.150 ************************************ 00:15:23.150 END TEST raid5f_superblock_test 00:15:23.150 ************************************ 00:15:23.150 11:00:29 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:23.150 11:00:29 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:23.150 11:00:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:23.150 11:00:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:23.150 11:00:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:23.150 ************************************ 00:15:23.150 START TEST raid5f_rebuild_test 00:15:23.150 ************************************ 00:15:23.150 11:00:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:15:23.150 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:23.150 11:00:29 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:23.150 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81747 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81747 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 81747 ']' 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:23.151 11:00:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.151 [2024-11-15 11:00:29.963277] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:15:23.151 [2024-11-15 11:00:29.963486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81747 ] 00:15:23.151 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:23.151 Zero copy mechanism will not be used. 00:15:23.410 [2024-11-15 11:00:30.136608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.410 [2024-11-15 11:00:30.249709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.668 [2024-11-15 11:00:30.443421] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.668 [2024-11-15 11:00:30.443577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.926 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:23.926 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:15:23.926 11:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:23.926 11:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:23.926 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.926 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.926 BaseBdev1_malloc 00:15:23.926 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.926 
11:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:23.926 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.926 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.926 [2024-11-15 11:00:30.847185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:23.926 [2024-11-15 11:00:30.847257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.926 [2024-11-15 11:00:30.847286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:23.926 [2024-11-15 11:00:30.847298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.926 [2024-11-15 11:00:30.849630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.926 [2024-11-15 11:00:30.849720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:24.184 BaseBdev1 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.184 BaseBdev2_malloc 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.184 [2024-11-15 11:00:30.903369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:24.184 [2024-11-15 11:00:30.903480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.184 [2024-11-15 11:00:30.903505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:24.184 [2024-11-15 11:00:30.903518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.184 [2024-11-15 11:00:30.905750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.184 [2024-11-15 11:00:30.905792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:24.184 BaseBdev2 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.184 BaseBdev3_malloc 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.184 [2024-11-15 11:00:30.970689] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:24.184 [2024-11-15 11:00:30.970746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.184 [2024-11-15 11:00:30.970768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:24.184 [2024-11-15 11:00:30.970779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.184 [2024-11-15 11:00:30.972839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.184 [2024-11-15 11:00:30.972948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:24.184 BaseBdev3 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.184 11:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.184 spare_malloc 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.184 spare_delay 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.184 [2024-11-15 11:00:31.036634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:24.184 [2024-11-15 11:00:31.036691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.184 [2024-11-15 11:00:31.036711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:24.184 [2024-11-15 11:00:31.036722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.184 [2024-11-15 11:00:31.039005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.184 [2024-11-15 11:00:31.039053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:24.184 spare 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.184 [2024-11-15 11:00:31.048683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.184 [2024-11-15 11:00:31.050673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.184 [2024-11-15 11:00:31.050736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.184 [2024-11-15 11:00:31.050827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:24.184 [2024-11-15 11:00:31.050838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:24.184 [2024-11-15 
11:00:31.051113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:24.184 [2024-11-15 11:00:31.056801] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:24.184 [2024-11-15 11:00:31.056827] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:24.184 [2024-11-15 11:00:31.057063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.184 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.443 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.443 "name": "raid_bdev1", 00:15:24.443 "uuid": "a9855cfd-ae3d-457a-83c4-22cefe811676", 00:15:24.443 "strip_size_kb": 64, 00:15:24.443 "state": "online", 00:15:24.443 "raid_level": "raid5f", 00:15:24.443 "superblock": false, 00:15:24.443 "num_base_bdevs": 3, 00:15:24.443 "num_base_bdevs_discovered": 3, 00:15:24.443 "num_base_bdevs_operational": 3, 00:15:24.443 "base_bdevs_list": [ 00:15:24.443 { 00:15:24.443 "name": "BaseBdev1", 00:15:24.443 "uuid": "791d1a89-b48f-5c01-929e-c830dbe089dc", 00:15:24.443 "is_configured": true, 00:15:24.443 "data_offset": 0, 00:15:24.443 "data_size": 65536 00:15:24.443 }, 00:15:24.443 { 00:15:24.443 "name": "BaseBdev2", 00:15:24.443 "uuid": "b5da8ca8-8722-592d-b027-8ca6ec554c77", 00:15:24.443 "is_configured": true, 00:15:24.443 "data_offset": 0, 00:15:24.443 "data_size": 65536 00:15:24.443 }, 00:15:24.443 { 00:15:24.443 "name": "BaseBdev3", 00:15:24.443 "uuid": "99a5a991-6d52-5e08-93b7-b6a534faa139", 00:15:24.443 "is_configured": true, 00:15:24.443 "data_offset": 0, 00:15:24.443 "data_size": 65536 00:15:24.443 } 00:15:24.443 ] 00:15:24.443 }' 00:15:24.443 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.443 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.703 11:00:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:24.703 [2024-11-15 11:00:31.510948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:24.703 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:24.962 [2024-11-15 11:00:31.806292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:24.962 /dev/nbd0 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:24.962 1+0 records in 00:15:24.962 1+0 records out 00:15:24.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319811 s, 
12.8 MB/s 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:24.962 11:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:25.530 512+0 records in 00:15:25.530 512+0 records out 00:15:25.530 67108864 bytes (67 MB, 64 MiB) copied, 0.426206 s, 157 MB/s 00:15:25.530 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:25.530 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:25.530 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:25.530 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:25.530 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:25.530 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:15:25.530 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:25.789 [2024-11-15 11:00:32.542622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.789 [2024-11-15 11:00:32.559148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.789 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.789 "name": "raid_bdev1", 00:15:25.789 "uuid": "a9855cfd-ae3d-457a-83c4-22cefe811676", 00:15:25.789 "strip_size_kb": 64, 00:15:25.789 "state": "online", 00:15:25.789 "raid_level": "raid5f", 00:15:25.789 "superblock": false, 00:15:25.789 "num_base_bdevs": 3, 00:15:25.789 "num_base_bdevs_discovered": 2, 00:15:25.789 "num_base_bdevs_operational": 2, 00:15:25.789 "base_bdevs_list": [ 00:15:25.789 { 00:15:25.789 "name": null, 00:15:25.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.789 "is_configured": false, 00:15:25.789 "data_offset": 0, 00:15:25.789 "data_size": 65536 00:15:25.789 }, 
00:15:25.789 { 00:15:25.789 "name": "BaseBdev2", 00:15:25.789 "uuid": "b5da8ca8-8722-592d-b027-8ca6ec554c77", 00:15:25.789 "is_configured": true, 00:15:25.789 "data_offset": 0, 00:15:25.789 "data_size": 65536 00:15:25.789 }, 00:15:25.789 { 00:15:25.789 "name": "BaseBdev3", 00:15:25.789 "uuid": "99a5a991-6d52-5e08-93b7-b6a534faa139", 00:15:25.789 "is_configured": true, 00:15:25.789 "data_offset": 0, 00:15:25.789 "data_size": 65536 00:15:25.790 } 00:15:25.790 ] 00:15:25.790 }' 00:15:25.790 11:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.790 11:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.361 11:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:26.361 11:00:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.361 11:00:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.361 [2024-11-15 11:00:33.034369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:26.361 [2024-11-15 11:00:33.053060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:26.361 11:00:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.361 11:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:26.361 [2024-11-15 11:00:33.061194] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.298 "name": "raid_bdev1", 00:15:27.298 "uuid": "a9855cfd-ae3d-457a-83c4-22cefe811676", 00:15:27.298 "strip_size_kb": 64, 00:15:27.298 "state": "online", 00:15:27.298 "raid_level": "raid5f", 00:15:27.298 "superblock": false, 00:15:27.298 "num_base_bdevs": 3, 00:15:27.298 "num_base_bdevs_discovered": 3, 00:15:27.298 "num_base_bdevs_operational": 3, 00:15:27.298 "process": { 00:15:27.298 "type": "rebuild", 00:15:27.298 "target": "spare", 00:15:27.298 "progress": { 00:15:27.298 "blocks": 20480, 00:15:27.298 "percent": 15 00:15:27.298 } 00:15:27.298 }, 00:15:27.298 "base_bdevs_list": [ 00:15:27.298 { 00:15:27.298 "name": "spare", 00:15:27.298 "uuid": "c8db437c-ea71-56eb-90d1-e480e335e437", 00:15:27.298 "is_configured": true, 00:15:27.298 "data_offset": 0, 00:15:27.298 "data_size": 65536 00:15:27.298 }, 00:15:27.298 { 00:15:27.298 "name": "BaseBdev2", 00:15:27.298 "uuid": "b5da8ca8-8722-592d-b027-8ca6ec554c77", 00:15:27.298 "is_configured": true, 00:15:27.298 "data_offset": 0, 00:15:27.298 "data_size": 65536 00:15:27.298 }, 00:15:27.298 { 00:15:27.298 "name": "BaseBdev3", 00:15:27.298 "uuid": "99a5a991-6d52-5e08-93b7-b6a534faa139", 00:15:27.298 "is_configured": true, 00:15:27.298 
"data_offset": 0, 00:15:27.298 "data_size": 65536 00:15:27.298 } 00:15:27.298 ] 00:15:27.298 }' 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:27.298 11:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.299 11:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.299 [2024-11-15 11:00:34.216254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.557 [2024-11-15 11:00:34.271423] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:27.557 [2024-11-15 11:00:34.271550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.557 [2024-11-15 11:00:34.271575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.557 [2024-11-15 11:00:34.271584] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.557 11:00:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.557 "name": "raid_bdev1", 00:15:27.557 "uuid": "a9855cfd-ae3d-457a-83c4-22cefe811676", 00:15:27.557 "strip_size_kb": 64, 00:15:27.557 "state": "online", 00:15:27.557 "raid_level": "raid5f", 00:15:27.557 "superblock": false, 00:15:27.557 "num_base_bdevs": 3, 00:15:27.557 "num_base_bdevs_discovered": 2, 00:15:27.557 "num_base_bdevs_operational": 2, 00:15:27.557 "base_bdevs_list": [ 00:15:27.557 { 00:15:27.557 "name": null, 00:15:27.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.557 "is_configured": false, 00:15:27.557 "data_offset": 0, 00:15:27.557 "data_size": 65536 00:15:27.557 }, 00:15:27.557 { 00:15:27.557 
"name": "BaseBdev2", 00:15:27.557 "uuid": "b5da8ca8-8722-592d-b027-8ca6ec554c77", 00:15:27.557 "is_configured": true, 00:15:27.557 "data_offset": 0, 00:15:27.557 "data_size": 65536 00:15:27.557 }, 00:15:27.557 { 00:15:27.557 "name": "BaseBdev3", 00:15:27.557 "uuid": "99a5a991-6d52-5e08-93b7-b6a534faa139", 00:15:27.557 "is_configured": true, 00:15:27.557 "data_offset": 0, 00:15:27.557 "data_size": 65536 00:15:27.557 } 00:15:27.557 ] 00:15:27.557 }' 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.557 11:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.125 "name": "raid_bdev1", 00:15:28.125 "uuid": "a9855cfd-ae3d-457a-83c4-22cefe811676", 00:15:28.125 "strip_size_kb": 64, 00:15:28.125 "state": 
"online", 00:15:28.125 "raid_level": "raid5f", 00:15:28.125 "superblock": false, 00:15:28.125 "num_base_bdevs": 3, 00:15:28.125 "num_base_bdevs_discovered": 2, 00:15:28.125 "num_base_bdevs_operational": 2, 00:15:28.125 "base_bdevs_list": [ 00:15:28.125 { 00:15:28.125 "name": null, 00:15:28.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.125 "is_configured": false, 00:15:28.125 "data_offset": 0, 00:15:28.125 "data_size": 65536 00:15:28.125 }, 00:15:28.125 { 00:15:28.125 "name": "BaseBdev2", 00:15:28.125 "uuid": "b5da8ca8-8722-592d-b027-8ca6ec554c77", 00:15:28.125 "is_configured": true, 00:15:28.125 "data_offset": 0, 00:15:28.125 "data_size": 65536 00:15:28.125 }, 00:15:28.125 { 00:15:28.125 "name": "BaseBdev3", 00:15:28.125 "uuid": "99a5a991-6d52-5e08-93b7-b6a534faa139", 00:15:28.125 "is_configured": true, 00:15:28.125 "data_offset": 0, 00:15:28.125 "data_size": 65536 00:15:28.125 } 00:15:28.125 ] 00:15:28.125 }' 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.125 [2024-11-15 11:00:34.951371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:28.125 [2024-11-15 11:00:34.970449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:28.125 11:00:34 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.125 11:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:28.125 [2024-11-15 11:00:34.978799] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:29.060 11:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.060 11:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.060 11:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.060 11:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.060 11:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.060 11:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.060 11:00:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.060 11:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.060 11:00:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.318 11:00:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.318 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.318 "name": "raid_bdev1", 00:15:29.318 "uuid": "a9855cfd-ae3d-457a-83c4-22cefe811676", 00:15:29.318 "strip_size_kb": 64, 00:15:29.318 "state": "online", 00:15:29.318 "raid_level": "raid5f", 00:15:29.318 "superblock": false, 00:15:29.318 "num_base_bdevs": 3, 00:15:29.318 "num_base_bdevs_discovered": 3, 00:15:29.318 "num_base_bdevs_operational": 3, 00:15:29.318 "process": { 00:15:29.318 "type": "rebuild", 00:15:29.318 "target": "spare", 00:15:29.318 "progress": { 
00:15:29.318 "blocks": 20480, 00:15:29.318 "percent": 15 00:15:29.318 } 00:15:29.318 }, 00:15:29.318 "base_bdevs_list": [ 00:15:29.318 { 00:15:29.318 "name": "spare", 00:15:29.318 "uuid": "c8db437c-ea71-56eb-90d1-e480e335e437", 00:15:29.318 "is_configured": true, 00:15:29.318 "data_offset": 0, 00:15:29.318 "data_size": 65536 00:15:29.318 }, 00:15:29.318 { 00:15:29.318 "name": "BaseBdev2", 00:15:29.318 "uuid": "b5da8ca8-8722-592d-b027-8ca6ec554c77", 00:15:29.318 "is_configured": true, 00:15:29.318 "data_offset": 0, 00:15:29.318 "data_size": 65536 00:15:29.318 }, 00:15:29.318 { 00:15:29.318 "name": "BaseBdev3", 00:15:29.318 "uuid": "99a5a991-6d52-5e08-93b7-b6a534faa139", 00:15:29.318 "is_configured": true, 00:15:29.318 "data_offset": 0, 00:15:29.318 "data_size": 65536 00:15:29.318 } 00:15:29.318 ] 00:15:29.318 }' 00:15:29.318 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.318 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.318 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.318 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.318 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:29.318 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:29.318 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:29.318 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=555 00:15:29.318 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.318 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.318 11:00:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.318 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.318 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.319 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.319 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.319 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.319 11:00:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.319 11:00:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.319 11:00:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.319 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.319 "name": "raid_bdev1", 00:15:29.319 "uuid": "a9855cfd-ae3d-457a-83c4-22cefe811676", 00:15:29.319 "strip_size_kb": 64, 00:15:29.319 "state": "online", 00:15:29.319 "raid_level": "raid5f", 00:15:29.319 "superblock": false, 00:15:29.319 "num_base_bdevs": 3, 00:15:29.319 "num_base_bdevs_discovered": 3, 00:15:29.319 "num_base_bdevs_operational": 3, 00:15:29.319 "process": { 00:15:29.319 "type": "rebuild", 00:15:29.319 "target": "spare", 00:15:29.319 "progress": { 00:15:29.319 "blocks": 22528, 00:15:29.319 "percent": 17 00:15:29.319 } 00:15:29.319 }, 00:15:29.319 "base_bdevs_list": [ 00:15:29.319 { 00:15:29.319 "name": "spare", 00:15:29.319 "uuid": "c8db437c-ea71-56eb-90d1-e480e335e437", 00:15:29.319 "is_configured": true, 00:15:29.319 "data_offset": 0, 00:15:29.319 "data_size": 65536 00:15:29.319 }, 00:15:29.319 { 00:15:29.319 "name": "BaseBdev2", 00:15:29.319 "uuid": "b5da8ca8-8722-592d-b027-8ca6ec554c77", 00:15:29.319 "is_configured": true, 00:15:29.319 
"data_offset": 0, 00:15:29.319 "data_size": 65536 00:15:29.319 }, 00:15:29.319 { 00:15:29.319 "name": "BaseBdev3", 00:15:29.319 "uuid": "99a5a991-6d52-5e08-93b7-b6a534faa139", 00:15:29.319 "is_configured": true, 00:15:29.319 "data_offset": 0, 00:15:29.319 "data_size": 65536 00:15:29.319 } 00:15:29.319 ] 00:15:29.319 }' 00:15:29.319 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.319 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.319 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.576 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.576 11:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.571 11:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.571 11:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.571 11:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.571 11:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.571 11:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.571 11:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.571 11:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.571 11:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.571 11:00:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.571 11:00:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.571 11:00:37 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.571 11:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.571 "name": "raid_bdev1", 00:15:30.571 "uuid": "a9855cfd-ae3d-457a-83c4-22cefe811676", 00:15:30.571 "strip_size_kb": 64, 00:15:30.572 "state": "online", 00:15:30.572 "raid_level": "raid5f", 00:15:30.572 "superblock": false, 00:15:30.572 "num_base_bdevs": 3, 00:15:30.572 "num_base_bdevs_discovered": 3, 00:15:30.572 "num_base_bdevs_operational": 3, 00:15:30.572 "process": { 00:15:30.572 "type": "rebuild", 00:15:30.572 "target": "spare", 00:15:30.572 "progress": { 00:15:30.572 "blocks": 45056, 00:15:30.572 "percent": 34 00:15:30.572 } 00:15:30.572 }, 00:15:30.572 "base_bdevs_list": [ 00:15:30.572 { 00:15:30.572 "name": "spare", 00:15:30.572 "uuid": "c8db437c-ea71-56eb-90d1-e480e335e437", 00:15:30.572 "is_configured": true, 00:15:30.572 "data_offset": 0, 00:15:30.572 "data_size": 65536 00:15:30.572 }, 00:15:30.572 { 00:15:30.572 "name": "BaseBdev2", 00:15:30.572 "uuid": "b5da8ca8-8722-592d-b027-8ca6ec554c77", 00:15:30.572 "is_configured": true, 00:15:30.572 "data_offset": 0, 00:15:30.572 "data_size": 65536 00:15:30.572 }, 00:15:30.572 { 00:15:30.572 "name": "BaseBdev3", 00:15:30.572 "uuid": "99a5a991-6d52-5e08-93b7-b6a534faa139", 00:15:30.572 "is_configured": true, 00:15:30.572 "data_offset": 0, 00:15:30.572 "data_size": 65536 00:15:30.572 } 00:15:30.572 ] 00:15:30.572 }' 00:15:30.572 11:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.572 11:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.572 11:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.572 11:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.572 11:00:37 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:31.508 11:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.508 11:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.508 11:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.508 11:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.508 11:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.508 11:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.508 11:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.508 11:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.508 11:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.508 11:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.766 11:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.766 11:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.766 "name": "raid_bdev1", 00:15:31.766 "uuid": "a9855cfd-ae3d-457a-83c4-22cefe811676", 00:15:31.766 "strip_size_kb": 64, 00:15:31.766 "state": "online", 00:15:31.766 "raid_level": "raid5f", 00:15:31.766 "superblock": false, 00:15:31.766 "num_base_bdevs": 3, 00:15:31.766 "num_base_bdevs_discovered": 3, 00:15:31.766 "num_base_bdevs_operational": 3, 00:15:31.766 "process": { 00:15:31.766 "type": "rebuild", 00:15:31.766 "target": "spare", 00:15:31.766 "progress": { 00:15:31.766 "blocks": 69632, 00:15:31.766 "percent": 53 00:15:31.766 } 00:15:31.766 }, 00:15:31.766 "base_bdevs_list": [ 00:15:31.766 { 00:15:31.766 "name": "spare", 00:15:31.766 
"uuid": "c8db437c-ea71-56eb-90d1-e480e335e437", 00:15:31.767 "is_configured": true, 00:15:31.767 "data_offset": 0, 00:15:31.767 "data_size": 65536 00:15:31.767 }, 00:15:31.767 { 00:15:31.767 "name": "BaseBdev2", 00:15:31.767 "uuid": "b5da8ca8-8722-592d-b027-8ca6ec554c77", 00:15:31.767 "is_configured": true, 00:15:31.767 "data_offset": 0, 00:15:31.767 "data_size": 65536 00:15:31.767 }, 00:15:31.767 { 00:15:31.767 "name": "BaseBdev3", 00:15:31.767 "uuid": "99a5a991-6d52-5e08-93b7-b6a534faa139", 00:15:31.767 "is_configured": true, 00:15:31.767 "data_offset": 0, 00:15:31.767 "data_size": 65536 00:15:31.767 } 00:15:31.767 ] 00:15:31.767 }' 00:15:31.767 11:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.767 11:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.767 11:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.767 11:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.767 11:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:32.758 11:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.758 11:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.758 11:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.758 11:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.758 11:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.758 11:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.758 11:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.758 11:00:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.758 11:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.758 11:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.758 11:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.759 11:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.759 "name": "raid_bdev1", 00:15:32.759 "uuid": "a9855cfd-ae3d-457a-83c4-22cefe811676", 00:15:32.759 "strip_size_kb": 64, 00:15:32.759 "state": "online", 00:15:32.759 "raid_level": "raid5f", 00:15:32.759 "superblock": false, 00:15:32.759 "num_base_bdevs": 3, 00:15:32.759 "num_base_bdevs_discovered": 3, 00:15:32.759 "num_base_bdevs_operational": 3, 00:15:32.759 "process": { 00:15:32.759 "type": "rebuild", 00:15:32.759 "target": "spare", 00:15:32.759 "progress": { 00:15:32.759 "blocks": 92160, 00:15:32.759 "percent": 70 00:15:32.759 } 00:15:32.759 }, 00:15:32.759 "base_bdevs_list": [ 00:15:32.759 { 00:15:32.759 "name": "spare", 00:15:32.759 "uuid": "c8db437c-ea71-56eb-90d1-e480e335e437", 00:15:32.759 "is_configured": true, 00:15:32.759 "data_offset": 0, 00:15:32.759 "data_size": 65536 00:15:32.759 }, 00:15:32.759 { 00:15:32.759 "name": "BaseBdev2", 00:15:32.759 "uuid": "b5da8ca8-8722-592d-b027-8ca6ec554c77", 00:15:32.759 "is_configured": true, 00:15:32.759 "data_offset": 0, 00:15:32.759 "data_size": 65536 00:15:32.759 }, 00:15:32.759 { 00:15:32.759 "name": "BaseBdev3", 00:15:32.759 "uuid": "99a5a991-6d52-5e08-93b7-b6a534faa139", 00:15:32.759 "is_configured": true, 00:15:32.759 "data_offset": 0, 00:15:32.759 "data_size": 65536 00:15:32.759 } 00:15:32.759 ] 00:15:32.759 }' 00:15:32.759 11:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.759 11:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.759 11:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.017 11:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.017 11:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:33.951 11:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:33.951 11:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.951 11:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.951 11:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.951 11:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.951 11:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.951 11:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.951 11:00:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.951 11:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.951 11:00:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.951 11:00:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.951 11:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.951 "name": "raid_bdev1", 00:15:33.951 "uuid": "a9855cfd-ae3d-457a-83c4-22cefe811676", 00:15:33.951 "strip_size_kb": 64, 00:15:33.951 "state": "online", 00:15:33.951 "raid_level": "raid5f", 00:15:33.951 "superblock": false, 00:15:33.951 "num_base_bdevs": 3, 00:15:33.951 "num_base_bdevs_discovered": 3, 00:15:33.951 
"num_base_bdevs_operational": 3, 00:15:33.951 "process": { 00:15:33.952 "type": "rebuild", 00:15:33.952 "target": "spare", 00:15:33.952 "progress": { 00:15:33.952 "blocks": 116736, 00:15:33.952 "percent": 89 00:15:33.952 } 00:15:33.952 }, 00:15:33.952 "base_bdevs_list": [ 00:15:33.952 { 00:15:33.952 "name": "spare", 00:15:33.952 "uuid": "c8db437c-ea71-56eb-90d1-e480e335e437", 00:15:33.952 "is_configured": true, 00:15:33.952 "data_offset": 0, 00:15:33.952 "data_size": 65536 00:15:33.952 }, 00:15:33.952 { 00:15:33.952 "name": "BaseBdev2", 00:15:33.952 "uuid": "b5da8ca8-8722-592d-b027-8ca6ec554c77", 00:15:33.952 "is_configured": true, 00:15:33.952 "data_offset": 0, 00:15:33.952 "data_size": 65536 00:15:33.952 }, 00:15:33.952 { 00:15:33.952 "name": "BaseBdev3", 00:15:33.952 "uuid": "99a5a991-6d52-5e08-93b7-b6a534faa139", 00:15:33.952 "is_configured": true, 00:15:33.952 "data_offset": 0, 00:15:33.952 "data_size": 65536 00:15:33.952 } 00:15:33.952 ] 00:15:33.952 }' 00:15:33.952 11:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.952 11:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.952 11:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.210 11:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.210 11:00:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:34.778 [2024-11-15 11:00:41.434390] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:34.778 [2024-11-15 11:00:41.434501] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:34.778 [2024-11-15 11:00:41.434553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.037 11:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:35.037 11:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.037 11:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.037 11:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.037 11:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.037 11:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.037 11:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.037 11:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.037 11:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.037 11:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.037 11:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.037 11:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.037 "name": "raid_bdev1", 00:15:35.037 "uuid": "a9855cfd-ae3d-457a-83c4-22cefe811676", 00:15:35.037 "strip_size_kb": 64, 00:15:35.037 "state": "online", 00:15:35.037 "raid_level": "raid5f", 00:15:35.037 "superblock": false, 00:15:35.037 "num_base_bdevs": 3, 00:15:35.037 "num_base_bdevs_discovered": 3, 00:15:35.037 "num_base_bdevs_operational": 3, 00:15:35.037 "base_bdevs_list": [ 00:15:35.037 { 00:15:35.037 "name": "spare", 00:15:35.037 "uuid": "c8db437c-ea71-56eb-90d1-e480e335e437", 00:15:35.037 "is_configured": true, 00:15:35.037 "data_offset": 0, 00:15:35.037 "data_size": 65536 00:15:35.037 }, 00:15:35.037 { 00:15:35.037 "name": "BaseBdev2", 00:15:35.037 "uuid": "b5da8ca8-8722-592d-b027-8ca6ec554c77", 00:15:35.037 "is_configured": true, 00:15:35.037 
"data_offset": 0, 00:15:35.037 "data_size": 65536 00:15:35.037 }, 00:15:35.037 { 00:15:35.037 "name": "BaseBdev3", 00:15:35.037 "uuid": "99a5a991-6d52-5e08-93b7-b6a534faa139", 00:15:35.037 "is_configured": true, 00:15:35.037 "data_offset": 0, 00:15:35.037 "data_size": 65536 00:15:35.037 } 00:15:35.037 ] 00:15:35.037 }' 00:15:35.037 11:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.296 11:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.296 11:00:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.296 "name": "raid_bdev1", 00:15:35.296 "uuid": "a9855cfd-ae3d-457a-83c4-22cefe811676", 00:15:35.296 "strip_size_kb": 64, 00:15:35.296 "state": "online", 00:15:35.296 "raid_level": "raid5f", 00:15:35.296 "superblock": false, 00:15:35.296 "num_base_bdevs": 3, 00:15:35.296 "num_base_bdevs_discovered": 3, 00:15:35.296 "num_base_bdevs_operational": 3, 00:15:35.296 "base_bdevs_list": [ 00:15:35.296 { 00:15:35.296 "name": "spare", 00:15:35.296 "uuid": "c8db437c-ea71-56eb-90d1-e480e335e437", 00:15:35.296 "is_configured": true, 00:15:35.296 "data_offset": 0, 00:15:35.296 "data_size": 65536 00:15:35.296 }, 00:15:35.296 { 00:15:35.296 "name": "BaseBdev2", 00:15:35.296 "uuid": "b5da8ca8-8722-592d-b027-8ca6ec554c77", 00:15:35.296 "is_configured": true, 00:15:35.296 "data_offset": 0, 00:15:35.296 "data_size": 65536 00:15:35.296 }, 00:15:35.296 { 00:15:35.296 "name": "BaseBdev3", 00:15:35.296 "uuid": "99a5a991-6d52-5e08-93b7-b6a534faa139", 00:15:35.296 "is_configured": true, 00:15:35.296 "data_offset": 0, 00:15:35.296 "data_size": 65536 00:15:35.296 } 00:15:35.296 ] 00:15:35.296 }' 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.296 11:00:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.296 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.555 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.555 "name": "raid_bdev1", 00:15:35.555 "uuid": "a9855cfd-ae3d-457a-83c4-22cefe811676", 00:15:35.555 "strip_size_kb": 64, 00:15:35.555 "state": "online", 00:15:35.555 "raid_level": "raid5f", 00:15:35.555 "superblock": false, 00:15:35.555 "num_base_bdevs": 3, 00:15:35.555 "num_base_bdevs_discovered": 3, 00:15:35.555 "num_base_bdevs_operational": 3, 00:15:35.555 "base_bdevs_list": [ 00:15:35.555 { 00:15:35.555 "name": "spare", 00:15:35.555 "uuid": "c8db437c-ea71-56eb-90d1-e480e335e437", 00:15:35.555 "is_configured": true, 00:15:35.555 "data_offset": 0, 00:15:35.555 "data_size": 65536 00:15:35.555 }, 00:15:35.555 { 00:15:35.555 
"name": "BaseBdev2", 00:15:35.555 "uuid": "b5da8ca8-8722-592d-b027-8ca6ec554c77", 00:15:35.555 "is_configured": true, 00:15:35.555 "data_offset": 0, 00:15:35.555 "data_size": 65536 00:15:35.555 }, 00:15:35.555 { 00:15:35.555 "name": "BaseBdev3", 00:15:35.555 "uuid": "99a5a991-6d52-5e08-93b7-b6a534faa139", 00:15:35.555 "is_configured": true, 00:15:35.555 "data_offset": 0, 00:15:35.555 "data_size": 65536 00:15:35.555 } 00:15:35.555 ] 00:15:35.555 }' 00:15:35.555 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.555 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.813 [2024-11-15 11:00:42.602949] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.813 [2024-11-15 11:00:42.602983] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.813 [2024-11-15 11:00:42.603078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.813 [2024-11-15 11:00:42.603164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.813 [2024-11-15 11:00:42.603181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.813 11:00:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:35.813 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:36.070 /dev/nbd0 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:36.070 11:00:42 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.070 1+0 records in 00:15:36.070 1+0 records out 00:15:36.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344879 s, 11.9 MB/s 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:36.070 11:00:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:36.326 /dev/nbd1 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.326 1+0 records in 00:15:36.326 1+0 records out 00:15:36.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481936 s, 8.5 MB/s 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:36.326 11:00:43 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:36.326 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:36.584 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:36.584 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:36.584 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:36.584 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:36.584 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:36.584 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.584 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:36.843 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:36.843 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:36.843 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:36.843 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.843 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.843 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:36.843 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:36.843 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:36.843 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.843 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81747 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 81747 ']' 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 81747 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81747 00:15:37.102 killing process with pid 81747 00:15:37.102 Received shutdown signal, test time was about 60.000000 seconds 00:15:37.102 00:15:37.102 Latency(us) 00:15:37.102 
[2024-11-15T11:00:44.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.102 [2024-11-15T11:00:44.030Z] =================================================================================================================== 00:15:37.102 [2024-11-15T11:00:44.030Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81747' 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 81747 00:15:37.102 [2024-11-15 11:00:43.874221] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:37.102 11:00:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 81747 00:15:37.669 [2024-11-15 11:00:44.288274] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.605 11:00:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:38.605 00:15:38.605 real 0m15.565s 00:15:38.605 user 0m19.199s 00:15:38.605 sys 0m2.076s 00:15:38.605 11:00:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:38.605 11:00:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.605 ************************************ 00:15:38.605 END TEST raid5f_rebuild_test 00:15:38.605 ************************************ 00:15:38.605 11:00:45 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:38.605 11:00:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:38.605 11:00:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:38.605 11:00:45 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:15:38.605 ************************************ 00:15:38.605 START TEST raid5f_rebuild_test_sb 00:15:38.605 ************************************ 00:15:38.605 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:15:38.605 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:38.605 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:38.605 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:38.605 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:38.605 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:38.605 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:38.605 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.605 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:38.605 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82189 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82189 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82189 ']' 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:38.606 11:00:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.865 [2024-11-15 11:00:45.595488] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:15:38.865 [2024-11-15 11:00:45.595713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:38.865 Zero copy mechanism will not be used. 
00:15:38.865 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82189 ] 00:15:38.865 [2024-11-15 11:00:45.769263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.125 [2024-11-15 11:00:45.884402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.383 [2024-11-15 11:00:46.085614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.383 [2024-11-15 11:00:46.085745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.642 BaseBdev1_malloc 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.642 [2024-11-15 11:00:46.498178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:39.642 [2024-11-15 11:00:46.498316] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:39.642 [2024-11-15 11:00:46.498348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:39.642 [2024-11-15 11:00:46.498362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.642 [2024-11-15 11:00:46.500512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.642 [2024-11-15 11:00:46.500551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:39.642 BaseBdev1 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.642 BaseBdev2_malloc 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.642 [2024-11-15 11:00:46.553907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:39.642 [2024-11-15 11:00:46.553968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.642 [2024-11-15 11:00:46.553987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:39.642 
[2024-11-15 11:00:46.553999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.642 [2024-11-15 11:00:46.556117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.642 [2024-11-15 11:00:46.556156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:39.642 BaseBdev2 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.642 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.901 BaseBdev3_malloc 00:15:39.901 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.901 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:39.901 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.901 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.901 [2024-11-15 11:00:46.621445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:39.901 [2024-11-15 11:00:46.621550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.901 [2024-11-15 11:00:46.621575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:39.901 [2024-11-15 11:00:46.621587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.901 [2024-11-15 11:00:46.623748] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.901 [2024-11-15 11:00:46.623791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:39.901 BaseBdev3 00:15:39.901 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.901 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:39.901 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.901 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.901 spare_malloc 00:15:39.901 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.901 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:39.901 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.901 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.901 spare_delay 00:15:39.901 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.901 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.902 [2024-11-15 11:00:46.688991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:39.902 [2024-11-15 11:00:46.689047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.902 [2024-11-15 11:00:46.689064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:15:39.902 [2024-11-15 11:00:46.689075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.902 [2024-11-15 11:00:46.691352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.902 [2024-11-15 11:00:46.691398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:39.902 spare 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.902 [2024-11-15 11:00:46.701037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.902 [2024-11-15 11:00:46.702881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.902 [2024-11-15 11:00:46.702942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.902 [2024-11-15 11:00:46.703123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:39.902 [2024-11-15 11:00:46.703138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:39.902 [2024-11-15 11:00:46.703415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:39.902 [2024-11-15 11:00:46.708986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:39.902 [2024-11-15 11:00:46.709009] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:39.902 [2024-11-15 11:00:46.709196] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.902 "name": "raid_bdev1", 00:15:39.902 
"uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:39.902 "strip_size_kb": 64, 00:15:39.902 "state": "online", 00:15:39.902 "raid_level": "raid5f", 00:15:39.902 "superblock": true, 00:15:39.902 "num_base_bdevs": 3, 00:15:39.902 "num_base_bdevs_discovered": 3, 00:15:39.902 "num_base_bdevs_operational": 3, 00:15:39.902 "base_bdevs_list": [ 00:15:39.902 { 00:15:39.902 "name": "BaseBdev1", 00:15:39.902 "uuid": "d78fe156-650d-5488-a659-86490c0da9e9", 00:15:39.902 "is_configured": true, 00:15:39.902 "data_offset": 2048, 00:15:39.902 "data_size": 63488 00:15:39.902 }, 00:15:39.902 { 00:15:39.902 "name": "BaseBdev2", 00:15:39.902 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:39.902 "is_configured": true, 00:15:39.902 "data_offset": 2048, 00:15:39.902 "data_size": 63488 00:15:39.902 }, 00:15:39.902 { 00:15:39.902 "name": "BaseBdev3", 00:15:39.902 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:39.902 "is_configured": true, 00:15:39.902 "data_offset": 2048, 00:15:39.902 "data_size": 63488 00:15:39.902 } 00:15:39.902 ] 00:15:39.902 }' 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.902 11:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.471 [2024-11-15 11:00:47.167267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 
-- # raid_bdev_size=126976 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:40.471 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:40.471 11:00:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:40.729 [2024-11-15 11:00:47.458621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:40.729 /dev/nbd0 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.729 1+0 records in 00:15:40.729 1+0 records out 00:15:40.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271578 s, 15.1 MB/s 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:40.729 11:00:47 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:40.729 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:40.989 496+0 records in 00:15:40.989 496+0 records out 00:15:40.989 65011712 bytes (65 MB, 62 MiB) copied, 0.324044 s, 201 MB/s 00:15:40.989 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:40.989 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.989 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:40.989 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.989 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:40.989 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.989 11:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:41.248 [2024-11-15 
11:00:48.079460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.248 [2024-11-15 11:00:48.100066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.248 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.248 "name": "raid_bdev1", 00:15:41.248 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:41.248 "strip_size_kb": 64, 00:15:41.248 "state": "online", 00:15:41.248 "raid_level": "raid5f", 00:15:41.248 "superblock": true, 00:15:41.248 "num_base_bdevs": 3, 00:15:41.248 "num_base_bdevs_discovered": 2, 00:15:41.248 "num_base_bdevs_operational": 2, 00:15:41.248 "base_bdevs_list": [ 00:15:41.248 { 00:15:41.248 "name": null, 00:15:41.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.248 "is_configured": false, 00:15:41.248 "data_offset": 0, 00:15:41.248 "data_size": 63488 00:15:41.248 }, 00:15:41.248 { 00:15:41.248 "name": "BaseBdev2", 00:15:41.248 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:41.248 
"is_configured": true, 00:15:41.249 "data_offset": 2048, 00:15:41.249 "data_size": 63488 00:15:41.249 }, 00:15:41.249 { 00:15:41.249 "name": "BaseBdev3", 00:15:41.249 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:41.249 "is_configured": true, 00:15:41.249 "data_offset": 2048, 00:15:41.249 "data_size": 63488 00:15:41.249 } 00:15:41.249 ] 00:15:41.249 }' 00:15:41.249 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.249 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.816 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:41.816 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.816 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.816 [2024-11-15 11:00:48.595473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.816 [2024-11-15 11:00:48.615352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:41.816 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.816 11:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:41.816 [2024-11-15 11:00:48.624545] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.782 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.782 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.782 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.782 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.782 11:00:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.782 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.782 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.782 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.782 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.782 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.782 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.782 "name": "raid_bdev1", 00:15:42.782 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:42.782 "strip_size_kb": 64, 00:15:42.782 "state": "online", 00:15:42.782 "raid_level": "raid5f", 00:15:42.782 "superblock": true, 00:15:42.782 "num_base_bdevs": 3, 00:15:42.782 "num_base_bdevs_discovered": 3, 00:15:42.782 "num_base_bdevs_operational": 3, 00:15:42.782 "process": { 00:15:42.782 "type": "rebuild", 00:15:42.782 "target": "spare", 00:15:42.782 "progress": { 00:15:42.782 "blocks": 20480, 00:15:42.782 "percent": 16 00:15:42.782 } 00:15:42.782 }, 00:15:42.782 "base_bdevs_list": [ 00:15:42.782 { 00:15:42.782 "name": "spare", 00:15:42.782 "uuid": "a6f691bc-812e-53bc-992e-ca5399d5f645", 00:15:42.782 "is_configured": true, 00:15:42.782 "data_offset": 2048, 00:15:42.782 "data_size": 63488 00:15:42.782 }, 00:15:42.782 { 00:15:42.782 "name": "BaseBdev2", 00:15:42.782 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:42.782 "is_configured": true, 00:15:42.782 "data_offset": 2048, 00:15:42.782 "data_size": 63488 00:15:42.782 }, 00:15:42.782 { 00:15:42.782 "name": "BaseBdev3", 00:15:42.782 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:42.782 "is_configured": true, 00:15:42.782 "data_offset": 2048, 00:15:42.782 "data_size": 
63488 00:15:42.782 } 00:15:42.782 ] 00:15:42.782 }' 00:15:42.782 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.040 [2024-11-15 11:00:49.784742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.040 [2024-11-15 11:00:49.835080] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:43.040 [2024-11-15 11:00:49.835245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.040 [2024-11-15 11:00:49.835289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.040 [2024-11-15 11:00:49.835331] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.040 11:00:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.040 "name": "raid_bdev1", 00:15:43.040 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:43.040 "strip_size_kb": 64, 00:15:43.040 "state": "online", 00:15:43.040 "raid_level": "raid5f", 00:15:43.040 "superblock": true, 00:15:43.040 "num_base_bdevs": 3, 00:15:43.040 "num_base_bdevs_discovered": 2, 00:15:43.040 "num_base_bdevs_operational": 2, 00:15:43.040 "base_bdevs_list": [ 00:15:43.040 { 00:15:43.040 "name": null, 00:15:43.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.040 "is_configured": false, 00:15:43.040 "data_offset": 0, 00:15:43.040 "data_size": 63488 
00:15:43.040 }, 00:15:43.040 { 00:15:43.040 "name": "BaseBdev2", 00:15:43.040 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:43.040 "is_configured": true, 00:15:43.040 "data_offset": 2048, 00:15:43.040 "data_size": 63488 00:15:43.040 }, 00:15:43.040 { 00:15:43.040 "name": "BaseBdev3", 00:15:43.040 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:43.040 "is_configured": true, 00:15:43.040 "data_offset": 2048, 00:15:43.040 "data_size": 63488 00:15:43.040 } 00:15:43.040 ] 00:15:43.040 }' 00:15:43.040 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.041 11:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.609 "name": "raid_bdev1", 00:15:43.609 "uuid": 
"66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:43.609 "strip_size_kb": 64, 00:15:43.609 "state": "online", 00:15:43.609 "raid_level": "raid5f", 00:15:43.609 "superblock": true, 00:15:43.609 "num_base_bdevs": 3, 00:15:43.609 "num_base_bdevs_discovered": 2, 00:15:43.609 "num_base_bdevs_operational": 2, 00:15:43.609 "base_bdevs_list": [ 00:15:43.609 { 00:15:43.609 "name": null, 00:15:43.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.609 "is_configured": false, 00:15:43.609 "data_offset": 0, 00:15:43.609 "data_size": 63488 00:15:43.609 }, 00:15:43.609 { 00:15:43.609 "name": "BaseBdev2", 00:15:43.609 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:43.609 "is_configured": true, 00:15:43.609 "data_offset": 2048, 00:15:43.609 "data_size": 63488 00:15:43.609 }, 00:15:43.609 { 00:15:43.609 "name": "BaseBdev3", 00:15:43.609 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:43.609 "is_configured": true, 00:15:43.609 "data_offset": 2048, 00:15:43.609 "data_size": 63488 00:15:43.609 } 00:15:43.609 ] 00:15:43.609 }' 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.609 [2024-11-15 11:00:50.463201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.609 [2024-11-15 11:00:50.480350] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.609 11:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:43.609 [2024-11-15 11:00:50.488241] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:44.588 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.588 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.588 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.588 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.588 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.588 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.588 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.589 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.589 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.589 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.846 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.846 "name": "raid_bdev1", 00:15:44.846 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:44.846 "strip_size_kb": 64, 00:15:44.846 "state": "online", 00:15:44.846 "raid_level": "raid5f", 00:15:44.846 "superblock": true, 00:15:44.846 "num_base_bdevs": 3, 00:15:44.846 "num_base_bdevs_discovered": 3, 00:15:44.846 
"num_base_bdevs_operational": 3, 00:15:44.846 "process": { 00:15:44.846 "type": "rebuild", 00:15:44.846 "target": "spare", 00:15:44.846 "progress": { 00:15:44.846 "blocks": 20480, 00:15:44.846 "percent": 16 00:15:44.846 } 00:15:44.846 }, 00:15:44.846 "base_bdevs_list": [ 00:15:44.846 { 00:15:44.846 "name": "spare", 00:15:44.846 "uuid": "a6f691bc-812e-53bc-992e-ca5399d5f645", 00:15:44.846 "is_configured": true, 00:15:44.846 "data_offset": 2048, 00:15:44.846 "data_size": 63488 00:15:44.846 }, 00:15:44.846 { 00:15:44.846 "name": "BaseBdev2", 00:15:44.846 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:44.846 "is_configured": true, 00:15:44.846 "data_offset": 2048, 00:15:44.846 "data_size": 63488 00:15:44.846 }, 00:15:44.846 { 00:15:44.846 "name": "BaseBdev3", 00:15:44.847 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:44.847 "is_configured": true, 00:15:44.847 "data_offset": 2048, 00:15:44.847 "data_size": 63488 00:15:44.847 } 00:15:44.847 ] 00:15:44.847 }' 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:44.847 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:44.847 
11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=570 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.847 "name": "raid_bdev1", 00:15:44.847 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:44.847 "strip_size_kb": 64, 00:15:44.847 "state": "online", 00:15:44.847 "raid_level": "raid5f", 00:15:44.847 "superblock": true, 00:15:44.847 "num_base_bdevs": 3, 00:15:44.847 "num_base_bdevs_discovered": 3, 00:15:44.847 "num_base_bdevs_operational": 3, 00:15:44.847 "process": { 00:15:44.847 "type": "rebuild", 00:15:44.847 "target": "spare", 00:15:44.847 "progress": { 00:15:44.847 "blocks": 22528, 00:15:44.847 "percent": 17 00:15:44.847 } 00:15:44.847 }, 
00:15:44.847 "base_bdevs_list": [ 00:15:44.847 { 00:15:44.847 "name": "spare", 00:15:44.847 "uuid": "a6f691bc-812e-53bc-992e-ca5399d5f645", 00:15:44.847 "is_configured": true, 00:15:44.847 "data_offset": 2048, 00:15:44.847 "data_size": 63488 00:15:44.847 }, 00:15:44.847 { 00:15:44.847 "name": "BaseBdev2", 00:15:44.847 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:44.847 "is_configured": true, 00:15:44.847 "data_offset": 2048, 00:15:44.847 "data_size": 63488 00:15:44.847 }, 00:15:44.847 { 00:15:44.847 "name": "BaseBdev3", 00:15:44.847 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:44.847 "is_configured": true, 00:15:44.847 "data_offset": 2048, 00:15:44.847 "data_size": 63488 00:15:44.847 } 00:15:44.847 ] 00:15:44.847 }' 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.847 11:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.222 11:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.222 11:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.222 11:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.222 11:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.222 11:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.222 11:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.222 
11:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.222 11:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.222 11:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.222 11:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.222 11:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.222 11:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.222 "name": "raid_bdev1", 00:15:46.222 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:46.222 "strip_size_kb": 64, 00:15:46.222 "state": "online", 00:15:46.222 "raid_level": "raid5f", 00:15:46.222 "superblock": true, 00:15:46.222 "num_base_bdevs": 3, 00:15:46.222 "num_base_bdevs_discovered": 3, 00:15:46.222 "num_base_bdevs_operational": 3, 00:15:46.222 "process": { 00:15:46.222 "type": "rebuild", 00:15:46.222 "target": "spare", 00:15:46.222 "progress": { 00:15:46.222 "blocks": 45056, 00:15:46.222 "percent": 35 00:15:46.222 } 00:15:46.222 }, 00:15:46.222 "base_bdevs_list": [ 00:15:46.222 { 00:15:46.222 "name": "spare", 00:15:46.222 "uuid": "a6f691bc-812e-53bc-992e-ca5399d5f645", 00:15:46.222 "is_configured": true, 00:15:46.222 "data_offset": 2048, 00:15:46.222 "data_size": 63488 00:15:46.222 }, 00:15:46.222 { 00:15:46.222 "name": "BaseBdev2", 00:15:46.222 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:46.222 "is_configured": true, 00:15:46.222 "data_offset": 2048, 00:15:46.222 "data_size": 63488 00:15:46.222 }, 00:15:46.222 { 00:15:46.222 "name": "BaseBdev3", 00:15:46.222 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:46.222 "is_configured": true, 00:15:46.222 "data_offset": 2048, 00:15:46.222 "data_size": 63488 00:15:46.222 } 00:15:46.222 ] 00:15:46.222 }' 00:15:46.222 11:00:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.222 11:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.222 11:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.222 11:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.222 11:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:47.158 11:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.158 11:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.158 11:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.158 11:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.158 11:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.158 11:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.158 11:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.158 11:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.158 11:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.158 11:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.158 11:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.158 11:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.158 "name": "raid_bdev1", 00:15:47.158 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:47.158 
"strip_size_kb": 64, 00:15:47.158 "state": "online", 00:15:47.158 "raid_level": "raid5f", 00:15:47.158 "superblock": true, 00:15:47.158 "num_base_bdevs": 3, 00:15:47.158 "num_base_bdevs_discovered": 3, 00:15:47.158 "num_base_bdevs_operational": 3, 00:15:47.158 "process": { 00:15:47.158 "type": "rebuild", 00:15:47.158 "target": "spare", 00:15:47.158 "progress": { 00:15:47.158 "blocks": 69632, 00:15:47.158 "percent": 54 00:15:47.158 } 00:15:47.158 }, 00:15:47.158 "base_bdevs_list": [ 00:15:47.158 { 00:15:47.158 "name": "spare", 00:15:47.158 "uuid": "a6f691bc-812e-53bc-992e-ca5399d5f645", 00:15:47.158 "is_configured": true, 00:15:47.158 "data_offset": 2048, 00:15:47.158 "data_size": 63488 00:15:47.158 }, 00:15:47.158 { 00:15:47.158 "name": "BaseBdev2", 00:15:47.158 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:47.159 "is_configured": true, 00:15:47.159 "data_offset": 2048, 00:15:47.159 "data_size": 63488 00:15:47.159 }, 00:15:47.159 { 00:15:47.159 "name": "BaseBdev3", 00:15:47.159 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:47.159 "is_configured": true, 00:15:47.159 "data_offset": 2048, 00:15:47.159 "data_size": 63488 00:15:47.159 } 00:15:47.159 ] 00:15:47.159 }' 00:15:47.159 11:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.159 11:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.159 11:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.159 11:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.159 11:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:48.590 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:48.590 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:15:48.590 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.590 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.590 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.590 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.590 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.590 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.590 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.590 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.590 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.590 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.590 "name": "raid_bdev1", 00:15:48.590 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:48.590 "strip_size_kb": 64, 00:15:48.590 "state": "online", 00:15:48.590 "raid_level": "raid5f", 00:15:48.590 "superblock": true, 00:15:48.590 "num_base_bdevs": 3, 00:15:48.590 "num_base_bdevs_discovered": 3, 00:15:48.590 "num_base_bdevs_operational": 3, 00:15:48.590 "process": { 00:15:48.590 "type": "rebuild", 00:15:48.590 "target": "spare", 00:15:48.590 "progress": { 00:15:48.590 "blocks": 92160, 00:15:48.590 "percent": 72 00:15:48.590 } 00:15:48.590 }, 00:15:48.590 "base_bdevs_list": [ 00:15:48.590 { 00:15:48.590 "name": "spare", 00:15:48.590 "uuid": "a6f691bc-812e-53bc-992e-ca5399d5f645", 00:15:48.590 "is_configured": true, 00:15:48.590 "data_offset": 2048, 00:15:48.591 "data_size": 63488 00:15:48.591 }, 00:15:48.591 { 00:15:48.591 "name": "BaseBdev2", 00:15:48.591 "uuid": 
"a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:48.591 "is_configured": true, 00:15:48.591 "data_offset": 2048, 00:15:48.591 "data_size": 63488 00:15:48.591 }, 00:15:48.591 { 00:15:48.591 "name": "BaseBdev3", 00:15:48.591 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:48.591 "is_configured": true, 00:15:48.591 "data_offset": 2048, 00:15:48.591 "data_size": 63488 00:15:48.591 } 00:15:48.591 ] 00:15:48.591 }' 00:15:48.591 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.591 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.591 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.591 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.591 11:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:49.529 11:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.529 11:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.529 11:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.529 11:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.529 11:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.529 11:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.529 11:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.529 11:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.529 11:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:49.529 11:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.529 11:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.529 11:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.529 "name": "raid_bdev1", 00:15:49.529 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:49.529 "strip_size_kb": 64, 00:15:49.529 "state": "online", 00:15:49.529 "raid_level": "raid5f", 00:15:49.529 "superblock": true, 00:15:49.529 "num_base_bdevs": 3, 00:15:49.529 "num_base_bdevs_discovered": 3, 00:15:49.529 "num_base_bdevs_operational": 3, 00:15:49.529 "process": { 00:15:49.529 "type": "rebuild", 00:15:49.529 "target": "spare", 00:15:49.529 "progress": { 00:15:49.529 "blocks": 114688, 00:15:49.529 "percent": 90 00:15:49.529 } 00:15:49.529 }, 00:15:49.529 "base_bdevs_list": [ 00:15:49.529 { 00:15:49.529 "name": "spare", 00:15:49.529 "uuid": "a6f691bc-812e-53bc-992e-ca5399d5f645", 00:15:49.529 "is_configured": true, 00:15:49.529 "data_offset": 2048, 00:15:49.529 "data_size": 63488 00:15:49.529 }, 00:15:49.529 { 00:15:49.529 "name": "BaseBdev2", 00:15:49.529 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:49.529 "is_configured": true, 00:15:49.529 "data_offset": 2048, 00:15:49.529 "data_size": 63488 00:15:49.529 }, 00:15:49.529 { 00:15:49.529 "name": "BaseBdev3", 00:15:49.529 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:49.529 "is_configured": true, 00:15:49.529 "data_offset": 2048, 00:15:49.529 "data_size": 63488 00:15:49.529 } 00:15:49.529 ] 00:15:49.529 }' 00:15:49.529 11:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.529 11:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.529 11:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.529 
11:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.529 11:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:50.096 [2024-11-15 11:00:56.741265] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:50.096 [2024-11-15 11:00:56.741467] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:50.096 [2024-11-15 11:00:56.741621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.664 "name": "raid_bdev1", 00:15:50.664 "uuid": 
"66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:50.664 "strip_size_kb": 64, 00:15:50.664 "state": "online", 00:15:50.664 "raid_level": "raid5f", 00:15:50.664 "superblock": true, 00:15:50.664 "num_base_bdevs": 3, 00:15:50.664 "num_base_bdevs_discovered": 3, 00:15:50.664 "num_base_bdevs_operational": 3, 00:15:50.664 "base_bdevs_list": [ 00:15:50.664 { 00:15:50.664 "name": "spare", 00:15:50.664 "uuid": "a6f691bc-812e-53bc-992e-ca5399d5f645", 00:15:50.664 "is_configured": true, 00:15:50.664 "data_offset": 2048, 00:15:50.664 "data_size": 63488 00:15:50.664 }, 00:15:50.664 { 00:15:50.664 "name": "BaseBdev2", 00:15:50.664 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:50.664 "is_configured": true, 00:15:50.664 "data_offset": 2048, 00:15:50.664 "data_size": 63488 00:15:50.664 }, 00:15:50.664 { 00:15:50.664 "name": "BaseBdev3", 00:15:50.664 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:50.664 "is_configured": true, 00:15:50.664 "data_offset": 2048, 00:15:50.664 "data_size": 63488 00:15:50.664 } 00:15:50.664 ] 00:15:50.664 }' 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:50.664 11:00:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.664 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.664 "name": "raid_bdev1", 00:15:50.664 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:50.664 "strip_size_kb": 64, 00:15:50.664 "state": "online", 00:15:50.664 "raid_level": "raid5f", 00:15:50.664 "superblock": true, 00:15:50.664 "num_base_bdevs": 3, 00:15:50.664 "num_base_bdevs_discovered": 3, 00:15:50.664 "num_base_bdevs_operational": 3, 00:15:50.664 "base_bdevs_list": [ 00:15:50.664 { 00:15:50.664 "name": "spare", 00:15:50.664 "uuid": "a6f691bc-812e-53bc-992e-ca5399d5f645", 00:15:50.664 "is_configured": true, 00:15:50.664 "data_offset": 2048, 00:15:50.664 "data_size": 63488 00:15:50.664 }, 00:15:50.664 { 00:15:50.664 "name": "BaseBdev2", 00:15:50.664 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:50.664 "is_configured": true, 00:15:50.664 "data_offset": 2048, 00:15:50.664 "data_size": 63488 00:15:50.664 }, 00:15:50.664 { 00:15:50.664 "name": "BaseBdev3", 00:15:50.664 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:50.664 "is_configured": true, 00:15:50.664 "data_offset": 2048, 00:15:50.664 "data_size": 63488 00:15:50.664 } 00:15:50.664 ] 00:15:50.664 }' 00:15:50.665 11:00:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.924 "name": "raid_bdev1", 00:15:50.924 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:50.924 "strip_size_kb": 64, 00:15:50.924 "state": "online", 00:15:50.924 "raid_level": "raid5f", 00:15:50.924 "superblock": true, 00:15:50.924 "num_base_bdevs": 3, 00:15:50.924 "num_base_bdevs_discovered": 3, 00:15:50.924 "num_base_bdevs_operational": 3, 00:15:50.924 "base_bdevs_list": [ 00:15:50.924 { 00:15:50.924 "name": "spare", 00:15:50.924 "uuid": "a6f691bc-812e-53bc-992e-ca5399d5f645", 00:15:50.924 "is_configured": true, 00:15:50.924 "data_offset": 2048, 00:15:50.924 "data_size": 63488 00:15:50.924 }, 00:15:50.924 { 00:15:50.924 "name": "BaseBdev2", 00:15:50.924 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:50.924 "is_configured": true, 00:15:50.924 "data_offset": 2048, 00:15:50.924 "data_size": 63488 00:15:50.924 }, 00:15:50.924 { 00:15:50.924 "name": "BaseBdev3", 00:15:50.924 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:50.924 "is_configured": true, 00:15:50.924 "data_offset": 2048, 00:15:50.924 "data_size": 63488 00:15:50.924 } 00:15:50.924 ] 00:15:50.924 }' 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.924 11:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.493 [2024-11-15 11:00:58.161110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.493 [2024-11-15 
11:00:58.161144] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.493 [2024-11-15 11:00:58.161237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.493 [2024-11-15 11:00:58.161334] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.493 [2024-11-15 11:00:58.161352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:51.493 11:00:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:51.493 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:51.493 /dev/nbd0 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.753 1+0 records in 00:15:51.753 1+0 
records out 00:15:51.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381225 s, 10.7 MB/s 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:51.753 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:51.753 /dev/nbd1 00:15:52.013 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:52.013 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:52.013 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:52.013 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:15:52.013 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:52.013 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:52.014 11:00:58 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:52.014 1+0 records in 00:15:52.014 1+0 records out 00:15:52.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408968 s, 10.0 MB/s 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.014 11:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:52.273 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:52.273 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:52.273 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:52.273 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.273 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.273 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:52.273 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:52.273 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.273 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.273 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.532 [2024-11-15 11:00:59.406847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:52.532 [2024-11-15 11:00:59.406917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.532 [2024-11-15 11:00:59.406940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:52.532 [2024-11-15 11:00:59.406951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.532 [2024-11-15 11:00:59.409425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.532 [2024-11-15 11:00:59.409468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:52.532 [2024-11-15 11:00:59.409567] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:52.532 [2024-11-15 11:00:59.409629] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:52.532 [2024-11-15 11:00:59.409781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:52.532 [2024-11-15 11:00:59.409885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:52.532 spare 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.532 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.793 [2024-11-15 11:00:59.509805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:52.793 [2024-11-15 11:00:59.509867] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:52.793 [2024-11-15 11:00:59.510249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:52.793 [2024-11-15 11:00:59.516797] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:52.794 [2024-11-15 11:00:59.516907] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:52.794 [2024-11-15 11:00:59.517198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.794 "name": "raid_bdev1", 00:15:52.794 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:52.794 "strip_size_kb": 64, 00:15:52.794 "state": "online", 00:15:52.794 "raid_level": "raid5f", 00:15:52.794 "superblock": true, 00:15:52.794 "num_base_bdevs": 3, 00:15:52.794 "num_base_bdevs_discovered": 3, 00:15:52.794 "num_base_bdevs_operational": 3, 00:15:52.794 "base_bdevs_list": [ 00:15:52.794 { 00:15:52.794 "name": "spare", 00:15:52.794 "uuid": "a6f691bc-812e-53bc-992e-ca5399d5f645", 00:15:52.794 "is_configured": true, 00:15:52.794 
"data_offset": 2048, 00:15:52.794 "data_size": 63488 00:15:52.794 }, 00:15:52.794 { 00:15:52.794 "name": "BaseBdev2", 00:15:52.794 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:52.794 "is_configured": true, 00:15:52.794 "data_offset": 2048, 00:15:52.794 "data_size": 63488 00:15:52.794 }, 00:15:52.794 { 00:15:52.794 "name": "BaseBdev3", 00:15:52.794 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:52.794 "is_configured": true, 00:15:52.794 "data_offset": 2048, 00:15:52.794 "data_size": 63488 00:15:52.794 } 00:15:52.794 ] 00:15:52.794 }' 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.794 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.056 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:53.056 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.056 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:53.056 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:53.056 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.056 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.056 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.056 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.056 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.314 11:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.314 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.314 
"name": "raid_bdev1", 00:15:53.314 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:53.314 "strip_size_kb": 64, 00:15:53.314 "state": "online", 00:15:53.314 "raid_level": "raid5f", 00:15:53.314 "superblock": true, 00:15:53.314 "num_base_bdevs": 3, 00:15:53.314 "num_base_bdevs_discovered": 3, 00:15:53.314 "num_base_bdevs_operational": 3, 00:15:53.314 "base_bdevs_list": [ 00:15:53.314 { 00:15:53.314 "name": "spare", 00:15:53.314 "uuid": "a6f691bc-812e-53bc-992e-ca5399d5f645", 00:15:53.314 "is_configured": true, 00:15:53.314 "data_offset": 2048, 00:15:53.314 "data_size": 63488 00:15:53.314 }, 00:15:53.314 { 00:15:53.314 "name": "BaseBdev2", 00:15:53.314 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:53.314 "is_configured": true, 00:15:53.314 "data_offset": 2048, 00:15:53.314 "data_size": 63488 00:15:53.314 }, 00:15:53.314 { 00:15:53.314 "name": "BaseBdev3", 00:15:53.314 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:53.314 "is_configured": true, 00:15:53.314 "data_offset": 2048, 00:15:53.314 "data_size": 63488 00:15:53.314 } 00:15:53.314 ] 00:15:53.314 }' 00:15:53.314 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.314 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:53.314 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.314 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:53.314 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.314 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.314 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:53.314 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.314 
11:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.314 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.314 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:53.314 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.314 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.314 [2024-11-15 11:01:00.172212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:53.314 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.315 "name": "raid_bdev1", 00:15:53.315 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:53.315 "strip_size_kb": 64, 00:15:53.315 "state": "online", 00:15:53.315 "raid_level": "raid5f", 00:15:53.315 "superblock": true, 00:15:53.315 "num_base_bdevs": 3, 00:15:53.315 "num_base_bdevs_discovered": 2, 00:15:53.315 "num_base_bdevs_operational": 2, 00:15:53.315 "base_bdevs_list": [ 00:15:53.315 { 00:15:53.315 "name": null, 00:15:53.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.315 "is_configured": false, 00:15:53.315 "data_offset": 0, 00:15:53.315 "data_size": 63488 00:15:53.315 }, 00:15:53.315 { 00:15:53.315 "name": "BaseBdev2", 00:15:53.315 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:53.315 "is_configured": true, 00:15:53.315 "data_offset": 2048, 00:15:53.315 "data_size": 63488 00:15:53.315 }, 00:15:53.315 { 00:15:53.315 "name": "BaseBdev3", 00:15:53.315 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:53.315 "is_configured": true, 00:15:53.315 "data_offset": 2048, 00:15:53.315 "data_size": 63488 00:15:53.315 } 00:15:53.315 ] 00:15:53.315 }' 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.315 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.881 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:53.881 11:01:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.881 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.881 [2024-11-15 11:01:00.671436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.881 [2024-11-15 11:01:00.671710] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:53.881 [2024-11-15 11:01:00.671781] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:53.881 [2024-11-15 11:01:00.671877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.881 [2024-11-15 11:01:00.690591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:53.881 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.881 11:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:53.881 [2024-11-15 11:01:00.699196] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:54.817 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.817 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.817 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.817 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.817 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.817 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.817 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:54.817 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.817 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.817 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.077 "name": "raid_bdev1", 00:15:55.077 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:55.077 "strip_size_kb": 64, 00:15:55.077 "state": "online", 00:15:55.077 "raid_level": "raid5f", 00:15:55.077 "superblock": true, 00:15:55.077 "num_base_bdevs": 3, 00:15:55.077 "num_base_bdevs_discovered": 3, 00:15:55.077 "num_base_bdevs_operational": 3, 00:15:55.077 "process": { 00:15:55.077 "type": "rebuild", 00:15:55.077 "target": "spare", 00:15:55.077 "progress": { 00:15:55.077 "blocks": 20480, 00:15:55.077 "percent": 16 00:15:55.077 } 00:15:55.077 }, 00:15:55.077 "base_bdevs_list": [ 00:15:55.077 { 00:15:55.077 "name": "spare", 00:15:55.077 "uuid": "a6f691bc-812e-53bc-992e-ca5399d5f645", 00:15:55.077 "is_configured": true, 00:15:55.077 "data_offset": 2048, 00:15:55.077 "data_size": 63488 00:15:55.077 }, 00:15:55.077 { 00:15:55.077 "name": "BaseBdev2", 00:15:55.077 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:55.077 "is_configured": true, 00:15:55.077 "data_offset": 2048, 00:15:55.077 "data_size": 63488 00:15:55.077 }, 00:15:55.077 { 00:15:55.077 "name": "BaseBdev3", 00:15:55.077 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:55.077 "is_configured": true, 00:15:55.077 "data_offset": 2048, 00:15:55.077 "data_size": 63488 00:15:55.077 } 00:15:55.077 ] 00:15:55.077 }' 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.077 [2024-11-15 11:01:01.858630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:55.077 [2024-11-15 11:01:01.910672] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:55.077 [2024-11-15 11:01:01.910756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.077 [2024-11-15 11:01:01.910775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:55.077 [2024-11-15 11:01:01.910785] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.077 11:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.335 11:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.335 "name": "raid_bdev1", 00:15:55.335 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:55.335 "strip_size_kb": 64, 00:15:55.335 "state": "online", 00:15:55.335 "raid_level": "raid5f", 00:15:55.335 "superblock": true, 00:15:55.335 "num_base_bdevs": 3, 00:15:55.335 "num_base_bdevs_discovered": 2, 00:15:55.335 "num_base_bdevs_operational": 2, 00:15:55.335 "base_bdevs_list": [ 00:15:55.335 { 00:15:55.335 "name": null, 00:15:55.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.335 "is_configured": false, 00:15:55.335 "data_offset": 0, 00:15:55.335 "data_size": 63488 00:15:55.335 }, 00:15:55.335 { 00:15:55.335 "name": "BaseBdev2", 00:15:55.335 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:55.335 "is_configured": true, 00:15:55.335 "data_offset": 2048, 00:15:55.335 "data_size": 63488 00:15:55.335 }, 00:15:55.335 { 00:15:55.335 "name": "BaseBdev3", 00:15:55.335 "uuid": 
"ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:55.335 "is_configured": true, 00:15:55.335 "data_offset": 2048, 00:15:55.335 "data_size": 63488 00:15:55.335 } 00:15:55.335 ] 00:15:55.335 }' 00:15:55.335 11:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.335 11:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.594 11:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:55.594 11:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.594 11:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.594 [2024-11-15 11:01:02.493802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:55.594 [2024-11-15 11:01:02.493955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.594 [2024-11-15 11:01:02.493999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:55.594 [2024-11-15 11:01:02.494039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.594 [2024-11-15 11:01:02.494616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.594 [2024-11-15 11:01:02.494685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:55.594 [2024-11-15 11:01:02.494827] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:55.594 [2024-11-15 11:01:02.494875] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:55.594 [2024-11-15 11:01:02.494922] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:55.594 [2024-11-15 11:01:02.494985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:55.594 [2024-11-15 11:01:02.512410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:55.594 spare 00:15:55.594 11:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.594 11:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:55.852 [2024-11-15 11:01:02.520626] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:56.785 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.785 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.785 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.785 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.785 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.785 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.785 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.785 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.785 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.785 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.785 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.785 "name": "raid_bdev1", 00:15:56.785 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:56.785 "strip_size_kb": 64, 00:15:56.786 "state": 
"online", 00:15:56.786 "raid_level": "raid5f", 00:15:56.786 "superblock": true, 00:15:56.786 "num_base_bdevs": 3, 00:15:56.786 "num_base_bdevs_discovered": 3, 00:15:56.786 "num_base_bdevs_operational": 3, 00:15:56.786 "process": { 00:15:56.786 "type": "rebuild", 00:15:56.786 "target": "spare", 00:15:56.786 "progress": { 00:15:56.786 "blocks": 20480, 00:15:56.786 "percent": 16 00:15:56.786 } 00:15:56.786 }, 00:15:56.786 "base_bdevs_list": [ 00:15:56.786 { 00:15:56.786 "name": "spare", 00:15:56.786 "uuid": "a6f691bc-812e-53bc-992e-ca5399d5f645", 00:15:56.786 "is_configured": true, 00:15:56.786 "data_offset": 2048, 00:15:56.786 "data_size": 63488 00:15:56.786 }, 00:15:56.786 { 00:15:56.786 "name": "BaseBdev2", 00:15:56.786 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:56.786 "is_configured": true, 00:15:56.786 "data_offset": 2048, 00:15:56.786 "data_size": 63488 00:15:56.786 }, 00:15:56.786 { 00:15:56.786 "name": "BaseBdev3", 00:15:56.786 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:56.786 "is_configured": true, 00:15:56.786 "data_offset": 2048, 00:15:56.786 "data_size": 63488 00:15:56.786 } 00:15:56.786 ] 00:15:56.786 }' 00:15:56.786 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.786 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.786 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.786 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.786 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:56.786 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.786 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.786 [2024-11-15 11:01:03.696643] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.044 [2024-11-15 11:01:03.731630] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:57.044 [2024-11-15 11:01:03.731699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.044 [2024-11-15 11:01:03.731720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.044 [2024-11-15 11:01:03.731729] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.044 "name": "raid_bdev1", 00:15:57.044 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:57.044 "strip_size_kb": 64, 00:15:57.044 "state": "online", 00:15:57.044 "raid_level": "raid5f", 00:15:57.044 "superblock": true, 00:15:57.044 "num_base_bdevs": 3, 00:15:57.044 "num_base_bdevs_discovered": 2, 00:15:57.044 "num_base_bdevs_operational": 2, 00:15:57.044 "base_bdevs_list": [ 00:15:57.044 { 00:15:57.044 "name": null, 00:15:57.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.044 "is_configured": false, 00:15:57.044 "data_offset": 0, 00:15:57.044 "data_size": 63488 00:15:57.044 }, 00:15:57.044 { 00:15:57.044 "name": "BaseBdev2", 00:15:57.044 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:57.044 "is_configured": true, 00:15:57.044 "data_offset": 2048, 00:15:57.044 "data_size": 63488 00:15:57.044 }, 00:15:57.044 { 00:15:57.044 "name": "BaseBdev3", 00:15:57.044 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:57.044 "is_configured": true, 00:15:57.044 "data_offset": 2048, 00:15:57.044 "data_size": 63488 00:15:57.044 } 00:15:57.044 ] 00:15:57.044 }' 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.044 11:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.302 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.302 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:57.302 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.302 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.302 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.560 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.560 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.560 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.560 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.560 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.560 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.560 "name": "raid_bdev1", 00:15:57.560 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:57.560 "strip_size_kb": 64, 00:15:57.560 "state": "online", 00:15:57.560 "raid_level": "raid5f", 00:15:57.560 "superblock": true, 00:15:57.560 "num_base_bdevs": 3, 00:15:57.560 "num_base_bdevs_discovered": 2, 00:15:57.560 "num_base_bdevs_operational": 2, 00:15:57.560 "base_bdevs_list": [ 00:15:57.560 { 00:15:57.560 "name": null, 00:15:57.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.560 "is_configured": false, 00:15:57.560 "data_offset": 0, 00:15:57.560 "data_size": 63488 00:15:57.560 }, 00:15:57.560 { 00:15:57.560 "name": "BaseBdev2", 00:15:57.560 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:57.560 "is_configured": true, 00:15:57.560 "data_offset": 2048, 00:15:57.560 "data_size": 63488 00:15:57.560 }, 00:15:57.560 { 00:15:57.560 "name": "BaseBdev3", 00:15:57.560 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:57.560 "is_configured": true, 
00:15:57.560 "data_offset": 2048, 00:15:57.560 "data_size": 63488 00:15:57.560 } 00:15:57.560 ] 00:15:57.560 }' 00:15:57.560 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.560 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.560 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.560 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.560 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:57.561 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.561 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.561 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.561 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:57.561 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.561 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.561 [2024-11-15 11:01:04.354161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:57.561 [2024-11-15 11:01:04.354219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.561 [2024-11-15 11:01:04.354242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:57.561 [2024-11-15 11:01:04.354268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.561 [2024-11-15 11:01:04.354754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.561 [2024-11-15 
11:01:04.354772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:57.561 [2024-11-15 11:01:04.354850] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:57.561 [2024-11-15 11:01:04.354869] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:57.561 [2024-11-15 11:01:04.354889] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:57.561 [2024-11-15 11:01:04.354899] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:57.561 BaseBdev1 00:15:57.561 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.561 11:01:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:58.497 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:58.497 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.497 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.497 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.497 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.497 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.497 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.497 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.497 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.497 11:01:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.497 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.497 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.497 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.498 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.498 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.498 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.498 "name": "raid_bdev1", 00:15:58.498 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:58.498 "strip_size_kb": 64, 00:15:58.498 "state": "online", 00:15:58.498 "raid_level": "raid5f", 00:15:58.498 "superblock": true, 00:15:58.498 "num_base_bdevs": 3, 00:15:58.498 "num_base_bdevs_discovered": 2, 00:15:58.498 "num_base_bdevs_operational": 2, 00:15:58.498 "base_bdevs_list": [ 00:15:58.498 { 00:15:58.498 "name": null, 00:15:58.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.498 "is_configured": false, 00:15:58.498 "data_offset": 0, 00:15:58.498 "data_size": 63488 00:15:58.498 }, 00:15:58.498 { 00:15:58.498 "name": "BaseBdev2", 00:15:58.498 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:15:58.498 "is_configured": true, 00:15:58.498 "data_offset": 2048, 00:15:58.498 "data_size": 63488 00:15:58.498 }, 00:15:58.498 { 00:15:58.498 "name": "BaseBdev3", 00:15:58.498 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:58.498 "is_configured": true, 00:15:58.498 "data_offset": 2048, 00:15:58.498 "data_size": 63488 00:15:58.498 } 00:15:58.498 ] 00:15:58.498 }' 00:15:58.498 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.498 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:59.067 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.067 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.067 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.067 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.067 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.067 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.067 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.067 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.067 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.067 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.067 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.067 "name": "raid_bdev1", 00:15:59.067 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:15:59.067 "strip_size_kb": 64, 00:15:59.067 "state": "online", 00:15:59.068 "raid_level": "raid5f", 00:15:59.068 "superblock": true, 00:15:59.068 "num_base_bdevs": 3, 00:15:59.068 "num_base_bdevs_discovered": 2, 00:15:59.068 "num_base_bdevs_operational": 2, 00:15:59.068 "base_bdevs_list": [ 00:15:59.068 { 00:15:59.068 "name": null, 00:15:59.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.068 "is_configured": false, 00:15:59.068 "data_offset": 0, 00:15:59.068 "data_size": 63488 00:15:59.068 }, 00:15:59.068 { 00:15:59.068 "name": "BaseBdev2", 00:15:59.068 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 
00:15:59.068 "is_configured": true, 00:15:59.068 "data_offset": 2048, 00:15:59.068 "data_size": 63488 00:15:59.068 }, 00:15:59.068 { 00:15:59.068 "name": "BaseBdev3", 00:15:59.068 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:15:59.068 "is_configured": true, 00:15:59.068 "data_offset": 2048, 00:15:59.068 "data_size": 63488 00:15:59.068 } 00:15:59.068 ] 00:15:59.068 }' 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.068 11:01:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.068 [2024-11-15 11:01:05.968480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.068 [2024-11-15 11:01:05.968644] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:59.068 [2024-11-15 11:01:05.968659] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:59.068 request: 00:15:59.068 { 00:15:59.068 "base_bdev": "BaseBdev1", 00:15:59.068 "raid_bdev": "raid_bdev1", 00:15:59.068 "method": "bdev_raid_add_base_bdev", 00:15:59.068 "req_id": 1 00:15:59.068 } 00:15:59.068 Got JSON-RPC error response 00:15:59.068 response: 00:15:59.068 { 00:15:59.068 "code": -22, 00:15:59.068 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:59.068 } 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:59.068 11:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:00.448 11:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:00.448 11:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.448 11:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.448 11:01:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.448 11:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.448 11:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:00.448 11:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.448 11:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.448 11:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.448 11:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.448 11:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.448 11:01:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.448 11:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.448 11:01:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.448 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.448 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.448 "name": "raid_bdev1", 00:16:00.448 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:16:00.448 "strip_size_kb": 64, 00:16:00.448 "state": "online", 00:16:00.448 "raid_level": "raid5f", 00:16:00.449 "superblock": true, 00:16:00.449 "num_base_bdevs": 3, 00:16:00.449 "num_base_bdevs_discovered": 2, 00:16:00.449 "num_base_bdevs_operational": 2, 00:16:00.449 "base_bdevs_list": [ 00:16:00.449 { 00:16:00.449 "name": null, 00:16:00.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.449 "is_configured": false, 00:16:00.449 "data_offset": 0, 00:16:00.449 "data_size": 63488 00:16:00.449 }, 00:16:00.449 { 00:16:00.449 
"name": "BaseBdev2", 00:16:00.449 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:16:00.449 "is_configured": true, 00:16:00.449 "data_offset": 2048, 00:16:00.449 "data_size": 63488 00:16:00.449 }, 00:16:00.449 { 00:16:00.449 "name": "BaseBdev3", 00:16:00.449 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:16:00.449 "is_configured": true, 00:16:00.449 "data_offset": 2048, 00:16:00.449 "data_size": 63488 00:16:00.449 } 00:16:00.449 ] 00:16:00.449 }' 00:16:00.449 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.449 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.709 "name": "raid_bdev1", 00:16:00.709 "uuid": "66d60025-f721-4dff-96e6-2b6a6f690a1f", 00:16:00.709 
"strip_size_kb": 64, 00:16:00.709 "state": "online", 00:16:00.709 "raid_level": "raid5f", 00:16:00.709 "superblock": true, 00:16:00.709 "num_base_bdevs": 3, 00:16:00.709 "num_base_bdevs_discovered": 2, 00:16:00.709 "num_base_bdevs_operational": 2, 00:16:00.709 "base_bdevs_list": [ 00:16:00.709 { 00:16:00.709 "name": null, 00:16:00.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.709 "is_configured": false, 00:16:00.709 "data_offset": 0, 00:16:00.709 "data_size": 63488 00:16:00.709 }, 00:16:00.709 { 00:16:00.709 "name": "BaseBdev2", 00:16:00.709 "uuid": "a1ea2165-ed3f-5f88-8db8-915f3abd0588", 00:16:00.709 "is_configured": true, 00:16:00.709 "data_offset": 2048, 00:16:00.709 "data_size": 63488 00:16:00.709 }, 00:16:00.709 { 00:16:00.709 "name": "BaseBdev3", 00:16:00.709 "uuid": "ab227f26-9213-5f28-839d-ec77afbd9822", 00:16:00.709 "is_configured": true, 00:16:00.709 "data_offset": 2048, 00:16:00.709 "data_size": 63488 00:16:00.709 } 00:16:00.709 ] 00:16:00.709 }' 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82189 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82189 ']' 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 82189 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:00.709 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:00.709 11:01:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82189 00:16:00.969 killing process with pid 82189 00:16:00.969 Received shutdown signal, test time was about 60.000000 seconds 00:16:00.969 00:16:00.969 Latency(us) 00:16:00.969 [2024-11-15T11:01:07.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.969 [2024-11-15T11:01:07.897Z] =================================================================================================================== 00:16:00.969 [2024-11-15T11:01:07.897Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:00.969 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:00.969 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:00.969 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82189' 00:16:00.969 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 82189 00:16:00.969 [2024-11-15 11:01:07.660002] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.969 [2024-11-15 11:01:07.660142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.969 11:01:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 82189 00:16:00.969 [2024-11-15 11:01:07.660211] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.969 [2024-11-15 11:01:07.660225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:01.229 [2024-11-15 11:01:08.061567] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:02.611 11:01:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:02.611 ************************************ 00:16:02.611 END TEST 
raid5f_rebuild_test_sb 00:16:02.611 ************************************ 00:16:02.611 00:16:02.611 real 0m23.637s 00:16:02.611 user 0m30.529s 00:16:02.611 sys 0m2.821s 00:16:02.612 11:01:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:02.612 11:01:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.612 11:01:09 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:02.612 11:01:09 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:02.612 11:01:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:02.612 11:01:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:02.612 11:01:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:02.612 ************************************ 00:16:02.612 START TEST raid5f_state_function_test 00:16:02.612 ************************************ 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82948 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82948' 00:16:02.612 Process raid pid: 82948 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82948 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 82948 ']' 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:02.612 11:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.612 [2024-11-15 11:01:09.312357] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:16:02.612 [2024-11-15 11:01:09.312487] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.612 [2024-11-15 11:01:09.473567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.872 [2024-11-15 11:01:09.585166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.872 [2024-11-15 11:01:09.793244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.872 [2024-11-15 11:01:09.793284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.442 [2024-11-15 11:01:10.150983] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:03.442 [2024-11-15 11:01:10.151125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:03.442 [2024-11-15 11:01:10.151140] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.442 [2024-11-15 11:01:10.151150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.442 [2024-11-15 11:01:10.151157] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:03.442 [2024-11-15 11:01:10.151166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:03.442 [2024-11-15 11:01:10.151172] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:03.442 [2024-11-15 11:01:10.151181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.442 "name": "Existed_Raid", 00:16:03.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.442 "strip_size_kb": 64, 00:16:03.442 "state": "configuring", 00:16:03.442 "raid_level": "raid5f", 00:16:03.442 "superblock": false, 00:16:03.442 "num_base_bdevs": 4, 00:16:03.442 "num_base_bdevs_discovered": 0, 00:16:03.442 "num_base_bdevs_operational": 4, 00:16:03.442 "base_bdevs_list": [ 00:16:03.442 { 00:16:03.442 "name": "BaseBdev1", 00:16:03.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.442 "is_configured": false, 00:16:03.442 "data_offset": 0, 00:16:03.442 "data_size": 0 00:16:03.442 }, 00:16:03.442 { 00:16:03.442 "name": "BaseBdev2", 00:16:03.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.442 "is_configured": false, 00:16:03.442 "data_offset": 0, 00:16:03.442 "data_size": 0 00:16:03.442 }, 00:16:03.442 { 00:16:03.442 "name": "BaseBdev3", 00:16:03.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.442 "is_configured": false, 00:16:03.442 "data_offset": 0, 00:16:03.442 "data_size": 0 00:16:03.442 }, 00:16:03.442 { 00:16:03.442 "name": "BaseBdev4", 00:16:03.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.442 "is_configured": false, 00:16:03.442 "data_offset": 0, 00:16:03.442 "data_size": 0 00:16:03.442 } 00:16:03.442 ] 00:16:03.442 }' 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.442 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.702 11:01:10 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:03.702 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.702 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.702 [2024-11-15 11:01:10.602197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:03.702 [2024-11-15 11:01:10.602314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:03.702 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.702 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:03.702 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.702 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.702 [2024-11-15 11:01:10.614140] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:03.702 [2024-11-15 11:01:10.614223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:03.702 [2024-11-15 11:01:10.614273] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.702 [2024-11-15 11:01:10.614297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.702 [2024-11-15 11:01:10.614356] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:03.702 [2024-11-15 11:01:10.614380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:03.702 [2024-11-15 11:01:10.614398] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:03.702 [2024-11-15 11:01:10.614457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:03.702 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.702 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:03.702 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.702 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.963 [2024-11-15 11:01:10.661027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.963 BaseBdev1 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.963 
11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.963 [ 00:16:03.963 { 00:16:03.963 "name": "BaseBdev1", 00:16:03.963 "aliases": [ 00:16:03.963 "8b3d43f6-20bc-4acb-b53a-d90da6cf9074" 00:16:03.963 ], 00:16:03.963 "product_name": "Malloc disk", 00:16:03.963 "block_size": 512, 00:16:03.963 "num_blocks": 65536, 00:16:03.963 "uuid": "8b3d43f6-20bc-4acb-b53a-d90da6cf9074", 00:16:03.963 "assigned_rate_limits": { 00:16:03.963 "rw_ios_per_sec": 0, 00:16:03.963 "rw_mbytes_per_sec": 0, 00:16:03.963 "r_mbytes_per_sec": 0, 00:16:03.963 "w_mbytes_per_sec": 0 00:16:03.963 }, 00:16:03.963 "claimed": true, 00:16:03.963 "claim_type": "exclusive_write", 00:16:03.963 "zoned": false, 00:16:03.963 "supported_io_types": { 00:16:03.963 "read": true, 00:16:03.963 "write": true, 00:16:03.963 "unmap": true, 00:16:03.963 "flush": true, 00:16:03.963 "reset": true, 00:16:03.963 "nvme_admin": false, 00:16:03.963 "nvme_io": false, 00:16:03.963 "nvme_io_md": false, 00:16:03.963 "write_zeroes": true, 00:16:03.963 "zcopy": true, 00:16:03.963 "get_zone_info": false, 00:16:03.963 "zone_management": false, 00:16:03.963 "zone_append": false, 00:16:03.963 "compare": false, 00:16:03.963 "compare_and_write": false, 00:16:03.963 "abort": true, 00:16:03.963 "seek_hole": false, 00:16:03.963 "seek_data": false, 00:16:03.963 "copy": true, 00:16:03.963 "nvme_iov_md": false 00:16:03.963 }, 00:16:03.963 "memory_domains": [ 00:16:03.963 { 00:16:03.963 "dma_device_id": "system", 00:16:03.963 "dma_device_type": 1 00:16:03.963 }, 00:16:03.963 { 00:16:03.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.963 "dma_device_type": 2 00:16:03.963 } 00:16:03.963 ], 00:16:03.963 "driver_specific": {} 00:16:03.963 } 
00:16:03.963 ] 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.963 "name": "Existed_Raid", 00:16:03.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.963 "strip_size_kb": 64, 00:16:03.963 "state": "configuring", 00:16:03.963 "raid_level": "raid5f", 00:16:03.963 "superblock": false, 00:16:03.963 "num_base_bdevs": 4, 00:16:03.963 "num_base_bdevs_discovered": 1, 00:16:03.963 "num_base_bdevs_operational": 4, 00:16:03.963 "base_bdevs_list": [ 00:16:03.963 { 00:16:03.963 "name": "BaseBdev1", 00:16:03.963 "uuid": "8b3d43f6-20bc-4acb-b53a-d90da6cf9074", 00:16:03.963 "is_configured": true, 00:16:03.963 "data_offset": 0, 00:16:03.963 "data_size": 65536 00:16:03.963 }, 00:16:03.963 { 00:16:03.963 "name": "BaseBdev2", 00:16:03.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.963 "is_configured": false, 00:16:03.963 "data_offset": 0, 00:16:03.963 "data_size": 0 00:16:03.963 }, 00:16:03.963 { 00:16:03.963 "name": "BaseBdev3", 00:16:03.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.963 "is_configured": false, 00:16:03.963 "data_offset": 0, 00:16:03.963 "data_size": 0 00:16:03.963 }, 00:16:03.963 { 00:16:03.963 "name": "BaseBdev4", 00:16:03.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.963 "is_configured": false, 00:16:03.963 "data_offset": 0, 00:16:03.963 "data_size": 0 00:16:03.963 } 00:16:03.963 ] 00:16:03.963 }' 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.963 11:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.536 
[2024-11-15 11:01:11.176205] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:04.536 [2024-11-15 11:01:11.176321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.536 [2024-11-15 11:01:11.188230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.536 [2024-11-15 11:01:11.190153] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.536 [2024-11-15 11:01:11.190198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.536 [2024-11-15 11:01:11.190208] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:04.536 [2024-11-15 11:01:11.190219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:04.536 [2024-11-15 11:01:11.190227] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:04.536 [2024-11-15 11:01:11.190236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.536 "name": "Existed_Raid", 00:16:04.536 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:04.536 "strip_size_kb": 64, 00:16:04.536 "state": "configuring", 00:16:04.536 "raid_level": "raid5f", 00:16:04.536 "superblock": false, 00:16:04.536 "num_base_bdevs": 4, 00:16:04.536 "num_base_bdevs_discovered": 1, 00:16:04.536 "num_base_bdevs_operational": 4, 00:16:04.536 "base_bdevs_list": [ 00:16:04.536 { 00:16:04.536 "name": "BaseBdev1", 00:16:04.536 "uuid": "8b3d43f6-20bc-4acb-b53a-d90da6cf9074", 00:16:04.536 "is_configured": true, 00:16:04.536 "data_offset": 0, 00:16:04.536 "data_size": 65536 00:16:04.536 }, 00:16:04.536 { 00:16:04.536 "name": "BaseBdev2", 00:16:04.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.536 "is_configured": false, 00:16:04.536 "data_offset": 0, 00:16:04.536 "data_size": 0 00:16:04.536 }, 00:16:04.536 { 00:16:04.536 "name": "BaseBdev3", 00:16:04.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.536 "is_configured": false, 00:16:04.536 "data_offset": 0, 00:16:04.536 "data_size": 0 00:16:04.536 }, 00:16:04.536 { 00:16:04.536 "name": "BaseBdev4", 00:16:04.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.536 "is_configured": false, 00:16:04.536 "data_offset": 0, 00:16:04.536 "data_size": 0 00:16:04.536 } 00:16:04.536 ] 00:16:04.536 }' 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.536 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.801 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:04.801 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.801 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.801 [2024-11-15 11:01:11.669466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.801 BaseBdev2 00:16:04.801 11:01:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.801 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:04.801 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:04.801 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:04.801 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:04.801 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:04.801 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:04.801 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:04.801 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.801 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.801 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.801 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:04.801 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.802 [ 00:16:04.802 { 00:16:04.802 "name": "BaseBdev2", 00:16:04.802 "aliases": [ 00:16:04.802 "1c1effe8-e90c-4bb8-9e4e-a0116f576203" 00:16:04.802 ], 00:16:04.802 "product_name": "Malloc disk", 00:16:04.802 "block_size": 512, 00:16:04.802 "num_blocks": 65536, 00:16:04.802 "uuid": "1c1effe8-e90c-4bb8-9e4e-a0116f576203", 00:16:04.802 "assigned_rate_limits": { 00:16:04.802 "rw_ios_per_sec": 0, 00:16:04.802 "rw_mbytes_per_sec": 0, 00:16:04.802 
"r_mbytes_per_sec": 0, 00:16:04.802 "w_mbytes_per_sec": 0 00:16:04.802 }, 00:16:04.802 "claimed": true, 00:16:04.802 "claim_type": "exclusive_write", 00:16:04.802 "zoned": false, 00:16:04.802 "supported_io_types": { 00:16:04.802 "read": true, 00:16:04.802 "write": true, 00:16:04.802 "unmap": true, 00:16:04.802 "flush": true, 00:16:04.802 "reset": true, 00:16:04.802 "nvme_admin": false, 00:16:04.802 "nvme_io": false, 00:16:04.802 "nvme_io_md": false, 00:16:04.802 "write_zeroes": true, 00:16:04.802 "zcopy": true, 00:16:04.802 "get_zone_info": false, 00:16:04.802 "zone_management": false, 00:16:04.802 "zone_append": false, 00:16:04.802 "compare": false, 00:16:04.802 "compare_and_write": false, 00:16:04.802 "abort": true, 00:16:04.802 "seek_hole": false, 00:16:04.802 "seek_data": false, 00:16:04.802 "copy": true, 00:16:04.802 "nvme_iov_md": false 00:16:04.802 }, 00:16:04.802 "memory_domains": [ 00:16:04.802 { 00:16:04.802 "dma_device_id": "system", 00:16:04.802 "dma_device_type": 1 00:16:04.802 }, 00:16:04.802 { 00:16:04.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.802 "dma_device_type": 2 00:16:04.802 } 00:16:04.802 ], 00:16:04.802 "driver_specific": {} 00:16:04.802 } 00:16:04.802 ] 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.802 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.062 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.062 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.062 "name": "Existed_Raid", 00:16:05.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.062 "strip_size_kb": 64, 00:16:05.062 "state": "configuring", 00:16:05.062 "raid_level": "raid5f", 00:16:05.062 "superblock": false, 00:16:05.062 "num_base_bdevs": 4, 00:16:05.062 "num_base_bdevs_discovered": 2, 00:16:05.062 "num_base_bdevs_operational": 4, 00:16:05.062 "base_bdevs_list": [ 00:16:05.062 { 00:16:05.062 "name": "BaseBdev1", 00:16:05.062 "uuid": 
"8b3d43f6-20bc-4acb-b53a-d90da6cf9074", 00:16:05.062 "is_configured": true, 00:16:05.062 "data_offset": 0, 00:16:05.062 "data_size": 65536 00:16:05.062 }, 00:16:05.062 { 00:16:05.062 "name": "BaseBdev2", 00:16:05.062 "uuid": "1c1effe8-e90c-4bb8-9e4e-a0116f576203", 00:16:05.062 "is_configured": true, 00:16:05.062 "data_offset": 0, 00:16:05.062 "data_size": 65536 00:16:05.062 }, 00:16:05.062 { 00:16:05.062 "name": "BaseBdev3", 00:16:05.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.062 "is_configured": false, 00:16:05.062 "data_offset": 0, 00:16:05.062 "data_size": 0 00:16:05.062 }, 00:16:05.062 { 00:16:05.062 "name": "BaseBdev4", 00:16:05.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.062 "is_configured": false, 00:16:05.062 "data_offset": 0, 00:16:05.062 "data_size": 0 00:16:05.062 } 00:16:05.062 ] 00:16:05.062 }' 00:16:05.062 11:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.062 11:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.322 [2024-11-15 11:01:12.228279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:05.322 BaseBdev3 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.322 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.581 [ 00:16:05.581 { 00:16:05.581 "name": "BaseBdev3", 00:16:05.582 "aliases": [ 00:16:05.582 "3ebab9e6-5289-465f-8135-2b63b43ff78a" 00:16:05.582 ], 00:16:05.582 "product_name": "Malloc disk", 00:16:05.582 "block_size": 512, 00:16:05.582 "num_blocks": 65536, 00:16:05.582 "uuid": "3ebab9e6-5289-465f-8135-2b63b43ff78a", 00:16:05.582 "assigned_rate_limits": { 00:16:05.582 "rw_ios_per_sec": 0, 00:16:05.582 "rw_mbytes_per_sec": 0, 00:16:05.582 "r_mbytes_per_sec": 0, 00:16:05.582 "w_mbytes_per_sec": 0 00:16:05.582 }, 00:16:05.582 "claimed": true, 00:16:05.582 "claim_type": "exclusive_write", 00:16:05.582 "zoned": false, 00:16:05.582 "supported_io_types": { 00:16:05.582 "read": true, 00:16:05.582 "write": true, 00:16:05.582 "unmap": true, 00:16:05.582 "flush": true, 00:16:05.582 "reset": true, 00:16:05.582 "nvme_admin": false, 
00:16:05.582 "nvme_io": false, 00:16:05.582 "nvme_io_md": false, 00:16:05.582 "write_zeroes": true, 00:16:05.582 "zcopy": true, 00:16:05.582 "get_zone_info": false, 00:16:05.582 "zone_management": false, 00:16:05.582 "zone_append": false, 00:16:05.582 "compare": false, 00:16:05.582 "compare_and_write": false, 00:16:05.582 "abort": true, 00:16:05.582 "seek_hole": false, 00:16:05.582 "seek_data": false, 00:16:05.582 "copy": true, 00:16:05.582 "nvme_iov_md": false 00:16:05.582 }, 00:16:05.582 "memory_domains": [ 00:16:05.582 { 00:16:05.582 "dma_device_id": "system", 00:16:05.582 "dma_device_type": 1 00:16:05.582 }, 00:16:05.582 { 00:16:05.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.582 "dma_device_type": 2 00:16:05.582 } 00:16:05.582 ], 00:16:05.582 "driver_specific": {} 00:16:05.582 } 00:16:05.582 ] 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.582 "name": "Existed_Raid", 00:16:05.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.582 "strip_size_kb": 64, 00:16:05.582 "state": "configuring", 00:16:05.582 "raid_level": "raid5f", 00:16:05.582 "superblock": false, 00:16:05.582 "num_base_bdevs": 4, 00:16:05.582 "num_base_bdevs_discovered": 3, 00:16:05.582 "num_base_bdevs_operational": 4, 00:16:05.582 "base_bdevs_list": [ 00:16:05.582 { 00:16:05.582 "name": "BaseBdev1", 00:16:05.582 "uuid": "8b3d43f6-20bc-4acb-b53a-d90da6cf9074", 00:16:05.582 "is_configured": true, 00:16:05.582 "data_offset": 0, 00:16:05.582 "data_size": 65536 00:16:05.582 }, 00:16:05.582 { 00:16:05.582 "name": "BaseBdev2", 00:16:05.582 "uuid": "1c1effe8-e90c-4bb8-9e4e-a0116f576203", 00:16:05.582 "is_configured": true, 00:16:05.582 "data_offset": 0, 00:16:05.582 "data_size": 65536 00:16:05.582 }, 00:16:05.582 { 
00:16:05.582 "name": "BaseBdev3", 00:16:05.582 "uuid": "3ebab9e6-5289-465f-8135-2b63b43ff78a", 00:16:05.582 "is_configured": true, 00:16:05.582 "data_offset": 0, 00:16:05.582 "data_size": 65536 00:16:05.582 }, 00:16:05.582 { 00:16:05.582 "name": "BaseBdev4", 00:16:05.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.582 "is_configured": false, 00:16:05.582 "data_offset": 0, 00:16:05.582 "data_size": 0 00:16:05.582 } 00:16:05.582 ] 00:16:05.582 }' 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.582 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.842 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:05.842 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.842 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.842 [2024-11-15 11:01:12.739355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:05.842 [2024-11-15 11:01:12.739423] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:05.842 [2024-11-15 11:01:12.739432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:05.842 [2024-11-15 11:01:12.739683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:05.842 [2024-11-15 11:01:12.747261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:05.842 [2024-11-15 11:01:12.747285] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:05.842 [2024-11-15 11:01:12.747550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.842 BaseBdev4 00:16:05.842 11:01:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.842 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:05.842 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:05.842 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:05.842 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:05.842 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:05.842 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:05.842 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:05.842 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.842 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.842 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.842 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:05.842 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.842 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.102 [ 00:16:06.102 { 00:16:06.102 "name": "BaseBdev4", 00:16:06.102 "aliases": [ 00:16:06.102 "8fb9e1bf-34de-4e81-bcbf-859c996c8d7b" 00:16:06.102 ], 00:16:06.102 "product_name": "Malloc disk", 00:16:06.102 "block_size": 512, 00:16:06.102 "num_blocks": 65536, 00:16:06.102 "uuid": "8fb9e1bf-34de-4e81-bcbf-859c996c8d7b", 00:16:06.102 "assigned_rate_limits": { 00:16:06.102 "rw_ios_per_sec": 0, 00:16:06.102 
"rw_mbytes_per_sec": 0, 00:16:06.102 "r_mbytes_per_sec": 0, 00:16:06.102 "w_mbytes_per_sec": 0 00:16:06.102 }, 00:16:06.102 "claimed": true, 00:16:06.102 "claim_type": "exclusive_write", 00:16:06.102 "zoned": false, 00:16:06.102 "supported_io_types": { 00:16:06.102 "read": true, 00:16:06.102 "write": true, 00:16:06.102 "unmap": true, 00:16:06.102 "flush": true, 00:16:06.102 "reset": true, 00:16:06.102 "nvme_admin": false, 00:16:06.102 "nvme_io": false, 00:16:06.102 "nvme_io_md": false, 00:16:06.102 "write_zeroes": true, 00:16:06.102 "zcopy": true, 00:16:06.102 "get_zone_info": false, 00:16:06.102 "zone_management": false, 00:16:06.102 "zone_append": false, 00:16:06.102 "compare": false, 00:16:06.102 "compare_and_write": false, 00:16:06.102 "abort": true, 00:16:06.102 "seek_hole": false, 00:16:06.102 "seek_data": false, 00:16:06.102 "copy": true, 00:16:06.102 "nvme_iov_md": false 00:16:06.102 }, 00:16:06.102 "memory_domains": [ 00:16:06.102 { 00:16:06.102 "dma_device_id": "system", 00:16:06.102 "dma_device_type": 1 00:16:06.102 }, 00:16:06.102 { 00:16:06.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.102 "dma_device_type": 2 00:16:06.102 } 00:16:06.102 ], 00:16:06.102 "driver_specific": {} 00:16:06.102 } 00:16:06.102 ] 00:16:06.102 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.102 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:06.102 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:06.102 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:06.102 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:06.102 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.102 11:01:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.102 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.102 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.102 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.103 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.103 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.103 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.103 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.103 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.103 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.103 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.103 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.103 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.103 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.103 "name": "Existed_Raid", 00:16:06.103 "uuid": "00c2a3a0-ece6-4c15-b602-98ae992909e4", 00:16:06.103 "strip_size_kb": 64, 00:16:06.103 "state": "online", 00:16:06.103 "raid_level": "raid5f", 00:16:06.103 "superblock": false, 00:16:06.103 "num_base_bdevs": 4, 00:16:06.103 "num_base_bdevs_discovered": 4, 00:16:06.103 "num_base_bdevs_operational": 4, 00:16:06.103 "base_bdevs_list": [ 00:16:06.103 { 00:16:06.103 "name": 
"BaseBdev1", 00:16:06.103 "uuid": "8b3d43f6-20bc-4acb-b53a-d90da6cf9074", 00:16:06.103 "is_configured": true, 00:16:06.103 "data_offset": 0, 00:16:06.103 "data_size": 65536 00:16:06.103 }, 00:16:06.103 { 00:16:06.103 "name": "BaseBdev2", 00:16:06.103 "uuid": "1c1effe8-e90c-4bb8-9e4e-a0116f576203", 00:16:06.103 "is_configured": true, 00:16:06.103 "data_offset": 0, 00:16:06.103 "data_size": 65536 00:16:06.103 }, 00:16:06.103 { 00:16:06.103 "name": "BaseBdev3", 00:16:06.103 "uuid": "3ebab9e6-5289-465f-8135-2b63b43ff78a", 00:16:06.103 "is_configured": true, 00:16:06.103 "data_offset": 0, 00:16:06.103 "data_size": 65536 00:16:06.103 }, 00:16:06.103 { 00:16:06.103 "name": "BaseBdev4", 00:16:06.103 "uuid": "8fb9e1bf-34de-4e81-bcbf-859c996c8d7b", 00:16:06.103 "is_configured": true, 00:16:06.103 "data_offset": 0, 00:16:06.103 "data_size": 65536 00:16:06.103 } 00:16:06.103 ] 00:16:06.103 }' 00:16:06.103 11:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.103 11:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.363 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:06.363 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:06.363 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:06.363 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:06.363 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:06.363 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:06.363 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:06.363 11:01:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.363 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.363 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:06.363 [2024-11-15 11:01:13.183480] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.363 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.363 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:06.363 "name": "Existed_Raid", 00:16:06.363 "aliases": [ 00:16:06.363 "00c2a3a0-ece6-4c15-b602-98ae992909e4" 00:16:06.363 ], 00:16:06.363 "product_name": "Raid Volume", 00:16:06.363 "block_size": 512, 00:16:06.363 "num_blocks": 196608, 00:16:06.363 "uuid": "00c2a3a0-ece6-4c15-b602-98ae992909e4", 00:16:06.363 "assigned_rate_limits": { 00:16:06.363 "rw_ios_per_sec": 0, 00:16:06.363 "rw_mbytes_per_sec": 0, 00:16:06.363 "r_mbytes_per_sec": 0, 00:16:06.363 "w_mbytes_per_sec": 0 00:16:06.363 }, 00:16:06.363 "claimed": false, 00:16:06.363 "zoned": false, 00:16:06.363 "supported_io_types": { 00:16:06.363 "read": true, 00:16:06.363 "write": true, 00:16:06.363 "unmap": false, 00:16:06.363 "flush": false, 00:16:06.363 "reset": true, 00:16:06.363 "nvme_admin": false, 00:16:06.363 "nvme_io": false, 00:16:06.363 "nvme_io_md": false, 00:16:06.363 "write_zeroes": true, 00:16:06.363 "zcopy": false, 00:16:06.363 "get_zone_info": false, 00:16:06.363 "zone_management": false, 00:16:06.363 "zone_append": false, 00:16:06.363 "compare": false, 00:16:06.363 "compare_and_write": false, 00:16:06.363 "abort": false, 00:16:06.363 "seek_hole": false, 00:16:06.363 "seek_data": false, 00:16:06.363 "copy": false, 00:16:06.363 "nvme_iov_md": false 00:16:06.363 }, 00:16:06.363 "driver_specific": { 00:16:06.363 "raid": { 00:16:06.363 "uuid": "00c2a3a0-ece6-4c15-b602-98ae992909e4", 00:16:06.363 "strip_size_kb": 64, 
00:16:06.363 "state": "online", 00:16:06.363 "raid_level": "raid5f", 00:16:06.363 "superblock": false, 00:16:06.363 "num_base_bdevs": 4, 00:16:06.363 "num_base_bdevs_discovered": 4, 00:16:06.363 "num_base_bdevs_operational": 4, 00:16:06.363 "base_bdevs_list": [ 00:16:06.363 { 00:16:06.363 "name": "BaseBdev1", 00:16:06.363 "uuid": "8b3d43f6-20bc-4acb-b53a-d90da6cf9074", 00:16:06.363 "is_configured": true, 00:16:06.363 "data_offset": 0, 00:16:06.363 "data_size": 65536 00:16:06.363 }, 00:16:06.363 { 00:16:06.363 "name": "BaseBdev2", 00:16:06.363 "uuid": "1c1effe8-e90c-4bb8-9e4e-a0116f576203", 00:16:06.363 "is_configured": true, 00:16:06.363 "data_offset": 0, 00:16:06.363 "data_size": 65536 00:16:06.363 }, 00:16:06.363 { 00:16:06.363 "name": "BaseBdev3", 00:16:06.363 "uuid": "3ebab9e6-5289-465f-8135-2b63b43ff78a", 00:16:06.363 "is_configured": true, 00:16:06.363 "data_offset": 0, 00:16:06.363 "data_size": 65536 00:16:06.363 }, 00:16:06.363 { 00:16:06.363 "name": "BaseBdev4", 00:16:06.363 "uuid": "8fb9e1bf-34de-4e81-bcbf-859c996c8d7b", 00:16:06.363 "is_configured": true, 00:16:06.363 "data_offset": 0, 00:16:06.363 "data_size": 65536 00:16:06.363 } 00:16:06.363 ] 00:16:06.363 } 00:16:06.363 } 00:16:06.363 }' 00:16:06.363 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:06.363 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:06.363 BaseBdev2 00:16:06.363 BaseBdev3 00:16:06.363 BaseBdev4' 00:16:06.363 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.623 11:01:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.623 11:01:13 bdev_raid.raid5f_state_function_test -- 
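The loop traced above (bdev_raid.sh @191-@193) builds a "block_size md_size md_interleave dif_type" string for the raid bdev and for each base bdev, then compares them. A minimal bash sketch of that comparison, with assumed sample values: jq's `join(" ")` renders the null `md_size`/`md_interleave`/`dif_type` fields as empty strings, so both sides reduce to `512` followed by three spaces, which is why xtrace prints the match pattern as the escaped `\5\1\2\ \ \ `.

```shell
# Sketch of the @193 comparison; values are illustrative, not parsed
# from a live RPC. Three null fields joined with " " leave 3 trailing spaces.
cmp_raid_bdev='512   '   # block_size=512, md_size/md_interleave/dif_type null
cmp_base_bdev='512   '   # same properties queried from one base bdev

# [[ ]] string comparison; quoting keeps the trailing spaces significant
if [[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]]; then
    result=match
else
    result=mismatch
fi
```

The trailing spaces are the subtle part: dropping them on either side would make every comparison fail even when the block sizes agree.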
common/autotest_common.sh@10 -- # set +x 00:16:06.623 [2024-11-15 11:01:13.538700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:06.883 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.883 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:06.883 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:06.883 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:06.883 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:06.883 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:06.883 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:06.883 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.883 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.883 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.883 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.883 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.883 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.883 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.884 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.884 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.884 11:01:13 
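The `has_redundancy raid5f` call at @261 (returning 0 via the `case` at @198-@199) is what lets the test expect `online` after deleting BaseBdev1. A sketch of that decision, assuming raid1 and raid5f are the levels the script treats as redundant (the log only confirms raid5f):

```shell
# Hypothetical reconstruction of has_redundancy from the xtrace above:
# redundant levels survive losing one member, so the raid bdev is
# expected to stay online after a single base bdev is removed.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;
        *) return 1 ;;
    esac
}

if has_redundancy raid5f; then
    expected_state=online
else
    expected_state=offline
fi
```

For a non-redundant level such as raid0, the same removal would instead drive the expected state to `offline`.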
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.884 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.884 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.884 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.884 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.884 11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.884 "name": "Existed_Raid", 00:16:06.884 "uuid": "00c2a3a0-ece6-4c15-b602-98ae992909e4", 00:16:06.884 "strip_size_kb": 64, 00:16:06.884 "state": "online", 00:16:06.884 "raid_level": "raid5f", 00:16:06.884 "superblock": false, 00:16:06.884 "num_base_bdevs": 4, 00:16:06.884 "num_base_bdevs_discovered": 3, 00:16:06.884 "num_base_bdevs_operational": 3, 00:16:06.884 "base_bdevs_list": [ 00:16:06.884 { 00:16:06.884 "name": null, 00:16:06.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.884 "is_configured": false, 00:16:06.884 "data_offset": 0, 00:16:06.884 "data_size": 65536 00:16:06.884 }, 00:16:06.884 { 00:16:06.884 "name": "BaseBdev2", 00:16:06.884 "uuid": "1c1effe8-e90c-4bb8-9e4e-a0116f576203", 00:16:06.884 "is_configured": true, 00:16:06.884 "data_offset": 0, 00:16:06.884 "data_size": 65536 00:16:06.884 }, 00:16:06.884 { 00:16:06.884 "name": "BaseBdev3", 00:16:06.884 "uuid": "3ebab9e6-5289-465f-8135-2b63b43ff78a", 00:16:06.884 "is_configured": true, 00:16:06.884 "data_offset": 0, 00:16:06.884 "data_size": 65536 00:16:06.884 }, 00:16:06.884 { 00:16:06.884 "name": "BaseBdev4", 00:16:06.884 "uuid": "8fb9e1bf-34de-4e81-bcbf-859c996c8d7b", 00:16:06.884 "is_configured": true, 00:16:06.884 "data_offset": 0, 00:16:06.884 "data_size": 65536 00:16:06.884 } 00:16:06.884 ] 00:16:06.884 }' 00:16:06.884 
11:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.884 11:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.453 [2024-11-15 11:01:14.176554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:07.453 [2024-11-15 11:01:14.176660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:07.453 [2024-11-15 11:01:14.274661] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.453 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.453 [2024-11-15 11:01:14.350567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:07.713 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.713 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:07.713 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:07.713 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.713 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:16:07.713 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.713 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.713 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.714 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:07.714 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:07.714 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:07.714 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.714 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.714 [2024-11-15 11:01:14.505262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:07.714 [2024-11-15 11:01:14.505389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:07.714 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.714 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:07.714 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:07.714 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.714 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.714 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:07.714 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.714 11:01:14 
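The @270-@276 loop above walks the remaining members, deleting one malloc bdev per iteration and re-reading the raid bdev's name in between. A hypothetical stand-in for that loop (the `deleted+=` line replaces the real `rpc_cmd bdev_malloc_delete`, and the names are taken from the log):

```shell
# Sketch of the removal loop; no SPDK RPC is invoked here.
num_base_bdevs=4
base_bdevs=(BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4)
deleted=()

i=1   # index 0 (BaseBdev1) was already removed at @259
while (( i < num_base_bdevs )); do
    deleted+=("${base_bdevs[i]}")   # stands in for: rpc_cmd bdev_malloc_delete <name>
    i=$((i + 1))
done
```

After the final iteration the raid bdev has no members left, which matches the `raid_bdev_cleanup ... state offline` debug line that follows the BaseBdev4 removal.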
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.002 BaseBdev2 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.002 [ 00:16:08.002 { 00:16:08.002 "name": "BaseBdev2", 00:16:08.002 "aliases": [ 00:16:08.002 "923314b8-2e7f-4a81-86d6-8aeb8e15640b" 00:16:08.002 ], 00:16:08.002 "product_name": "Malloc disk", 00:16:08.002 "block_size": 512, 00:16:08.002 "num_blocks": 65536, 00:16:08.002 "uuid": "923314b8-2e7f-4a81-86d6-8aeb8e15640b", 00:16:08.002 "assigned_rate_limits": { 00:16:08.002 "rw_ios_per_sec": 0, 00:16:08.002 "rw_mbytes_per_sec": 0, 00:16:08.002 "r_mbytes_per_sec": 0, 00:16:08.002 "w_mbytes_per_sec": 0 00:16:08.002 }, 00:16:08.002 "claimed": false, 00:16:08.002 "zoned": false, 00:16:08.002 "supported_io_types": { 00:16:08.002 "read": true, 00:16:08.002 "write": true, 00:16:08.002 "unmap": true, 00:16:08.002 "flush": true, 00:16:08.002 "reset": true, 00:16:08.002 "nvme_admin": false, 00:16:08.002 "nvme_io": false, 00:16:08.002 "nvme_io_md": false, 00:16:08.002 "write_zeroes": true, 00:16:08.002 "zcopy": true, 00:16:08.002 "get_zone_info": false, 00:16:08.002 "zone_management": false, 00:16:08.002 "zone_append": false, 00:16:08.002 "compare": false, 00:16:08.002 "compare_and_write": false, 00:16:08.002 "abort": true, 00:16:08.002 "seek_hole": false, 00:16:08.002 "seek_data": false, 00:16:08.002 "copy": true, 00:16:08.002 "nvme_iov_md": false 00:16:08.002 }, 00:16:08.002 "memory_domains": [ 00:16:08.002 { 00:16:08.002 "dma_device_id": "system", 00:16:08.002 "dma_device_type": 1 00:16:08.002 }, 
00:16:08.002 { 00:16:08.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.002 "dma_device_type": 2 00:16:08.002 } 00:16:08.002 ], 00:16:08.002 "driver_specific": {} 00:16:08.002 } 00:16:08.002 ] 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.002 BaseBdev3 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.002 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.002 [ 00:16:08.002 { 00:16:08.002 "name": "BaseBdev3", 00:16:08.002 "aliases": [ 00:16:08.002 "74ee9162-c52c-4ee8-85e2-3df4200dba25" 00:16:08.002 ], 00:16:08.002 "product_name": "Malloc disk", 00:16:08.002 "block_size": 512, 00:16:08.002 "num_blocks": 65536, 00:16:08.002 "uuid": "74ee9162-c52c-4ee8-85e2-3df4200dba25", 00:16:08.002 "assigned_rate_limits": { 00:16:08.002 "rw_ios_per_sec": 0, 00:16:08.002 "rw_mbytes_per_sec": 0, 00:16:08.002 "r_mbytes_per_sec": 0, 00:16:08.002 "w_mbytes_per_sec": 0 00:16:08.002 }, 00:16:08.002 "claimed": false, 00:16:08.002 "zoned": false, 00:16:08.002 "supported_io_types": { 00:16:08.002 "read": true, 00:16:08.002 "write": true, 00:16:08.002 "unmap": true, 00:16:08.002 "flush": true, 00:16:08.002 "reset": true, 00:16:08.002 "nvme_admin": false, 00:16:08.002 "nvme_io": false, 00:16:08.002 "nvme_io_md": false, 00:16:08.002 "write_zeroes": true, 00:16:08.002 "zcopy": true, 00:16:08.002 "get_zone_info": false, 00:16:08.002 "zone_management": false, 00:16:08.002 "zone_append": false, 00:16:08.002 "compare": false, 00:16:08.002 "compare_and_write": false, 00:16:08.002 "abort": true, 00:16:08.002 "seek_hole": false, 00:16:08.002 "seek_data": false, 00:16:08.002 "copy": true, 00:16:08.002 "nvme_iov_md": false 00:16:08.002 }, 00:16:08.002 "memory_domains": [ 00:16:08.002 { 00:16:08.002 "dma_device_id": "system", 00:16:08.002 
"dma_device_type": 1 00:16:08.002 }, 00:16:08.002 { 00:16:08.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.002 "dma_device_type": 2 00:16:08.002 } 00:16:08.002 ], 00:16:08.003 "driver_specific": {} 00:16:08.003 } 00:16:08.003 ] 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.003 BaseBdev4 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:08.003 11:01:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.003 [ 00:16:08.003 { 00:16:08.003 "name": "BaseBdev4", 00:16:08.003 "aliases": [ 00:16:08.003 "116ac4f7-0e95-4206-a7e1-4a9f87a6b9cd" 00:16:08.003 ], 00:16:08.003 "product_name": "Malloc disk", 00:16:08.003 "block_size": 512, 00:16:08.003 "num_blocks": 65536, 00:16:08.003 "uuid": "116ac4f7-0e95-4206-a7e1-4a9f87a6b9cd", 00:16:08.003 "assigned_rate_limits": { 00:16:08.003 "rw_ios_per_sec": 0, 00:16:08.003 "rw_mbytes_per_sec": 0, 00:16:08.003 "r_mbytes_per_sec": 0, 00:16:08.003 "w_mbytes_per_sec": 0 00:16:08.003 }, 00:16:08.003 "claimed": false, 00:16:08.003 "zoned": false, 00:16:08.003 "supported_io_types": { 00:16:08.003 "read": true, 00:16:08.003 "write": true, 00:16:08.003 "unmap": true, 00:16:08.003 "flush": true, 00:16:08.003 "reset": true, 00:16:08.003 "nvme_admin": false, 00:16:08.003 "nvme_io": false, 00:16:08.003 "nvme_io_md": false, 00:16:08.003 "write_zeroes": true, 00:16:08.003 "zcopy": true, 00:16:08.003 "get_zone_info": false, 00:16:08.003 "zone_management": false, 00:16:08.003 "zone_append": false, 00:16:08.003 "compare": false, 00:16:08.003 "compare_and_write": false, 00:16:08.003 "abort": true, 00:16:08.003 "seek_hole": false, 00:16:08.003 "seek_data": false, 00:16:08.003 "copy": true, 00:16:08.003 "nvme_iov_md": false 00:16:08.003 }, 00:16:08.003 "memory_domains": [ 00:16:08.003 { 00:16:08.003 
"dma_device_id": "system", 00:16:08.003 "dma_device_type": 1 00:16:08.003 }, 00:16:08.003 { 00:16:08.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.003 "dma_device_type": 2 00:16:08.003 } 00:16:08.003 ], 00:16:08.003 "driver_specific": {} 00:16:08.003 } 00:16:08.003 ] 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.003 [2024-11-15 11:01:14.894180] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.003 [2024-11-15 11:01:14.894267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.003 [2024-11-15 11:01:14.894333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.003 [2024-11-15 11:01:14.896094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:08.003 [2024-11-15 11:01:14.896186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.003 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.261 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.261 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.261 "name": "Existed_Raid", 00:16:08.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.261 "strip_size_kb": 64, 00:16:08.261 "state": "configuring", 00:16:08.261 "raid_level": "raid5f", 00:16:08.261 "superblock": false, 00:16:08.261 
"num_base_bdevs": 4, 00:16:08.261 "num_base_bdevs_discovered": 3, 00:16:08.261 "num_base_bdevs_operational": 4, 00:16:08.261 "base_bdevs_list": [ 00:16:08.261 { 00:16:08.262 "name": "BaseBdev1", 00:16:08.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.262 "is_configured": false, 00:16:08.262 "data_offset": 0, 00:16:08.262 "data_size": 0 00:16:08.262 }, 00:16:08.262 { 00:16:08.262 "name": "BaseBdev2", 00:16:08.262 "uuid": "923314b8-2e7f-4a81-86d6-8aeb8e15640b", 00:16:08.262 "is_configured": true, 00:16:08.262 "data_offset": 0, 00:16:08.262 "data_size": 65536 00:16:08.262 }, 00:16:08.262 { 00:16:08.262 "name": "BaseBdev3", 00:16:08.262 "uuid": "74ee9162-c52c-4ee8-85e2-3df4200dba25", 00:16:08.262 "is_configured": true, 00:16:08.262 "data_offset": 0, 00:16:08.262 "data_size": 65536 00:16:08.262 }, 00:16:08.262 { 00:16:08.262 "name": "BaseBdev4", 00:16:08.262 "uuid": "116ac4f7-0e95-4206-a7e1-4a9f87a6b9cd", 00:16:08.262 "is_configured": true, 00:16:08.262 "data_offset": 0, 00:16:08.262 "data_size": 65536 00:16:08.262 } 00:16:08.262 ] 00:16:08.262 }' 00:16:08.262 11:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.262 11:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.521 [2024-11-15 11:01:15.357420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.521 "name": "Existed_Raid", 00:16:08.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.521 "strip_size_kb": 64, 00:16:08.521 "state": "configuring", 00:16:08.521 "raid_level": "raid5f", 00:16:08.521 "superblock": false, 00:16:08.521 "num_base_bdevs": 4, 
00:16:08.521 "num_base_bdevs_discovered": 2, 00:16:08.521 "num_base_bdevs_operational": 4, 00:16:08.521 "base_bdevs_list": [ 00:16:08.521 { 00:16:08.521 "name": "BaseBdev1", 00:16:08.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.521 "is_configured": false, 00:16:08.521 "data_offset": 0, 00:16:08.521 "data_size": 0 00:16:08.521 }, 00:16:08.521 { 00:16:08.521 "name": null, 00:16:08.521 "uuid": "923314b8-2e7f-4a81-86d6-8aeb8e15640b", 00:16:08.521 "is_configured": false, 00:16:08.521 "data_offset": 0, 00:16:08.521 "data_size": 65536 00:16:08.521 }, 00:16:08.521 { 00:16:08.521 "name": "BaseBdev3", 00:16:08.521 "uuid": "74ee9162-c52c-4ee8-85e2-3df4200dba25", 00:16:08.521 "is_configured": true, 00:16:08.521 "data_offset": 0, 00:16:08.521 "data_size": 65536 00:16:08.521 }, 00:16:08.521 { 00:16:08.521 "name": "BaseBdev4", 00:16:08.521 "uuid": "116ac4f7-0e95-4206-a7e1-4a9f87a6b9cd", 00:16:08.521 "is_configured": true, 00:16:08.521 "data_offset": 0, 00:16:08.521 "data_size": 65536 00:16:08.521 } 00:16:08.521 ] 00:16:08.521 }' 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.521 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:09.091 11:01:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.091 [2024-11-15 11:01:15.872128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.091 BaseBdev1 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:09.091 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.091 11:01:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.091 [ 00:16:09.091 { 00:16:09.091 "name": "BaseBdev1", 00:16:09.091 "aliases": [ 00:16:09.091 "f4bc9b7a-4c86-4fae-8559-6b9c1176f90c" 00:16:09.091 ], 00:16:09.091 "product_name": "Malloc disk", 00:16:09.091 "block_size": 512, 00:16:09.091 "num_blocks": 65536, 00:16:09.091 "uuid": "f4bc9b7a-4c86-4fae-8559-6b9c1176f90c", 00:16:09.091 "assigned_rate_limits": { 00:16:09.091 "rw_ios_per_sec": 0, 00:16:09.091 "rw_mbytes_per_sec": 0, 00:16:09.091 "r_mbytes_per_sec": 0, 00:16:09.091 "w_mbytes_per_sec": 0 00:16:09.091 }, 00:16:09.091 "claimed": true, 00:16:09.091 "claim_type": "exclusive_write", 00:16:09.091 "zoned": false, 00:16:09.091 "supported_io_types": { 00:16:09.091 "read": true, 00:16:09.091 "write": true, 00:16:09.091 "unmap": true, 00:16:09.091 "flush": true, 00:16:09.091 "reset": true, 00:16:09.091 "nvme_admin": false, 00:16:09.091 "nvme_io": false, 00:16:09.091 "nvme_io_md": false, 00:16:09.092 "write_zeroes": true, 00:16:09.092 "zcopy": true, 00:16:09.092 "get_zone_info": false, 00:16:09.092 "zone_management": false, 00:16:09.092 "zone_append": false, 00:16:09.092 "compare": false, 00:16:09.092 "compare_and_write": false, 00:16:09.092 "abort": true, 00:16:09.092 "seek_hole": false, 00:16:09.092 "seek_data": false, 00:16:09.092 "copy": true, 00:16:09.092 "nvme_iov_md": false 00:16:09.092 }, 00:16:09.092 "memory_domains": [ 00:16:09.092 { 00:16:09.092 "dma_device_id": "system", 00:16:09.092 "dma_device_type": 1 00:16:09.092 }, 00:16:09.092 { 00:16:09.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.092 "dma_device_type": 2 00:16:09.092 } 00:16:09.092 ], 00:16:09.092 "driver_specific": {} 00:16:09.092 } 00:16:09.092 ] 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:09.092 11:01:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.092 "name": "Existed_Raid", 00:16:09.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.092 "strip_size_kb": 64, 00:16:09.092 "state": 
"configuring", 00:16:09.092 "raid_level": "raid5f", 00:16:09.092 "superblock": false, 00:16:09.092 "num_base_bdevs": 4, 00:16:09.092 "num_base_bdevs_discovered": 3, 00:16:09.092 "num_base_bdevs_operational": 4, 00:16:09.092 "base_bdevs_list": [ 00:16:09.092 { 00:16:09.092 "name": "BaseBdev1", 00:16:09.092 "uuid": "f4bc9b7a-4c86-4fae-8559-6b9c1176f90c", 00:16:09.092 "is_configured": true, 00:16:09.092 "data_offset": 0, 00:16:09.092 "data_size": 65536 00:16:09.092 }, 00:16:09.092 { 00:16:09.092 "name": null, 00:16:09.092 "uuid": "923314b8-2e7f-4a81-86d6-8aeb8e15640b", 00:16:09.092 "is_configured": false, 00:16:09.092 "data_offset": 0, 00:16:09.092 "data_size": 65536 00:16:09.092 }, 00:16:09.092 { 00:16:09.092 "name": "BaseBdev3", 00:16:09.092 "uuid": "74ee9162-c52c-4ee8-85e2-3df4200dba25", 00:16:09.092 "is_configured": true, 00:16:09.092 "data_offset": 0, 00:16:09.092 "data_size": 65536 00:16:09.092 }, 00:16:09.092 { 00:16:09.092 "name": "BaseBdev4", 00:16:09.092 "uuid": "116ac4f7-0e95-4206-a7e1-4a9f87a6b9cd", 00:16:09.092 "is_configured": true, 00:16:09.092 "data_offset": 0, 00:16:09.092 "data_size": 65536 00:16:09.092 } 00:16:09.092 ] 00:16:09.092 }' 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.092 11:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.662 11:01:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.662 [2024-11-15 11:01:16.435243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.662 11:01:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.662 "name": "Existed_Raid", 00:16:09.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.662 "strip_size_kb": 64, 00:16:09.662 "state": "configuring", 00:16:09.662 "raid_level": "raid5f", 00:16:09.662 "superblock": false, 00:16:09.662 "num_base_bdevs": 4, 00:16:09.662 "num_base_bdevs_discovered": 2, 00:16:09.662 "num_base_bdevs_operational": 4, 00:16:09.662 "base_bdevs_list": [ 00:16:09.662 { 00:16:09.662 "name": "BaseBdev1", 00:16:09.662 "uuid": "f4bc9b7a-4c86-4fae-8559-6b9c1176f90c", 00:16:09.662 "is_configured": true, 00:16:09.662 "data_offset": 0, 00:16:09.662 "data_size": 65536 00:16:09.662 }, 00:16:09.662 { 00:16:09.662 "name": null, 00:16:09.662 "uuid": "923314b8-2e7f-4a81-86d6-8aeb8e15640b", 00:16:09.662 "is_configured": false, 00:16:09.662 "data_offset": 0, 00:16:09.662 "data_size": 65536 00:16:09.662 }, 00:16:09.662 { 00:16:09.662 "name": null, 00:16:09.662 "uuid": "74ee9162-c52c-4ee8-85e2-3df4200dba25", 00:16:09.662 "is_configured": false, 00:16:09.662 "data_offset": 0, 00:16:09.662 "data_size": 65536 00:16:09.662 }, 00:16:09.662 { 00:16:09.662 "name": "BaseBdev4", 00:16:09.662 "uuid": "116ac4f7-0e95-4206-a7e1-4a9f87a6b9cd", 00:16:09.662 "is_configured": true, 00:16:09.662 "data_offset": 0, 00:16:09.662 "data_size": 65536 00:16:09.662 } 00:16:09.662 ] 00:16:09.662 }' 00:16:09.662 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.662 11:01:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.232 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.232 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.232 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:10.232 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.232 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.232 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:10.232 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:10.232 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.232 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.232 [2024-11-15 11:01:16.970322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:10.232 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.232 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:10.233 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.233 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.233 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.233 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.233 
11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.233 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.233 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.233 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.233 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.233 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.233 11:01:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.233 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.233 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.233 11:01:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.233 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.233 "name": "Existed_Raid", 00:16:10.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.233 "strip_size_kb": 64, 00:16:10.233 "state": "configuring", 00:16:10.233 "raid_level": "raid5f", 00:16:10.233 "superblock": false, 00:16:10.233 "num_base_bdevs": 4, 00:16:10.233 "num_base_bdevs_discovered": 3, 00:16:10.233 "num_base_bdevs_operational": 4, 00:16:10.233 "base_bdevs_list": [ 00:16:10.233 { 00:16:10.233 "name": "BaseBdev1", 00:16:10.233 "uuid": "f4bc9b7a-4c86-4fae-8559-6b9c1176f90c", 00:16:10.233 "is_configured": true, 00:16:10.233 "data_offset": 0, 00:16:10.233 "data_size": 65536 00:16:10.233 }, 00:16:10.233 { 00:16:10.233 "name": null, 00:16:10.233 "uuid": "923314b8-2e7f-4a81-86d6-8aeb8e15640b", 00:16:10.233 "is_configured": 
false, 00:16:10.233 "data_offset": 0, 00:16:10.233 "data_size": 65536 00:16:10.233 }, 00:16:10.233 { 00:16:10.233 "name": "BaseBdev3", 00:16:10.233 "uuid": "74ee9162-c52c-4ee8-85e2-3df4200dba25", 00:16:10.233 "is_configured": true, 00:16:10.233 "data_offset": 0, 00:16:10.233 "data_size": 65536 00:16:10.233 }, 00:16:10.233 { 00:16:10.233 "name": "BaseBdev4", 00:16:10.233 "uuid": "116ac4f7-0e95-4206-a7e1-4a9f87a6b9cd", 00:16:10.233 "is_configured": true, 00:16:10.233 "data_offset": 0, 00:16:10.233 "data_size": 65536 00:16:10.233 } 00:16:10.233 ] 00:16:10.233 }' 00:16:10.233 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.233 11:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.808 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.808 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:10.808 11:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.808 11:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.808 11:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.808 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.809 [2024-11-15 11:01:17.477486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.809 "name": "Existed_Raid", 00:16:10.809 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:10.809 "strip_size_kb": 64, 00:16:10.809 "state": "configuring", 00:16:10.809 "raid_level": "raid5f", 00:16:10.809 "superblock": false, 00:16:10.809 "num_base_bdevs": 4, 00:16:10.809 "num_base_bdevs_discovered": 2, 00:16:10.809 "num_base_bdevs_operational": 4, 00:16:10.809 "base_bdevs_list": [ 00:16:10.809 { 00:16:10.809 "name": null, 00:16:10.809 "uuid": "f4bc9b7a-4c86-4fae-8559-6b9c1176f90c", 00:16:10.809 "is_configured": false, 00:16:10.809 "data_offset": 0, 00:16:10.809 "data_size": 65536 00:16:10.809 }, 00:16:10.809 { 00:16:10.809 "name": null, 00:16:10.809 "uuid": "923314b8-2e7f-4a81-86d6-8aeb8e15640b", 00:16:10.809 "is_configured": false, 00:16:10.809 "data_offset": 0, 00:16:10.809 "data_size": 65536 00:16:10.809 }, 00:16:10.809 { 00:16:10.809 "name": "BaseBdev3", 00:16:10.809 "uuid": "74ee9162-c52c-4ee8-85e2-3df4200dba25", 00:16:10.809 "is_configured": true, 00:16:10.809 "data_offset": 0, 00:16:10.809 "data_size": 65536 00:16:10.809 }, 00:16:10.809 { 00:16:10.809 "name": "BaseBdev4", 00:16:10.809 "uuid": "116ac4f7-0e95-4206-a7e1-4a9f87a6b9cd", 00:16:10.809 "is_configured": true, 00:16:10.809 "data_offset": 0, 00:16:10.809 "data_size": 65536 00:16:10.809 } 00:16:10.809 ] 00:16:10.809 }' 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.809 11:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.386 [2024-11-15 11:01:18.074934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.386 "name": "Existed_Raid", 00:16:11.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.386 "strip_size_kb": 64, 00:16:11.386 "state": "configuring", 00:16:11.386 "raid_level": "raid5f", 00:16:11.386 "superblock": false, 00:16:11.386 "num_base_bdevs": 4, 00:16:11.386 "num_base_bdevs_discovered": 3, 00:16:11.386 "num_base_bdevs_operational": 4, 00:16:11.386 "base_bdevs_list": [ 00:16:11.386 { 00:16:11.386 "name": null, 00:16:11.386 "uuid": "f4bc9b7a-4c86-4fae-8559-6b9c1176f90c", 00:16:11.386 "is_configured": false, 00:16:11.386 "data_offset": 0, 00:16:11.386 "data_size": 65536 00:16:11.386 }, 00:16:11.386 { 00:16:11.386 "name": "BaseBdev2", 00:16:11.386 "uuid": "923314b8-2e7f-4a81-86d6-8aeb8e15640b", 00:16:11.386 "is_configured": true, 00:16:11.386 "data_offset": 0, 00:16:11.386 "data_size": 65536 00:16:11.386 }, 00:16:11.386 { 00:16:11.386 "name": "BaseBdev3", 00:16:11.386 "uuid": "74ee9162-c52c-4ee8-85e2-3df4200dba25", 00:16:11.386 "is_configured": true, 00:16:11.386 "data_offset": 0, 00:16:11.386 "data_size": 65536 00:16:11.386 }, 00:16:11.386 { 00:16:11.386 "name": "BaseBdev4", 00:16:11.386 "uuid": "116ac4f7-0e95-4206-a7e1-4a9f87a6b9cd", 00:16:11.386 "is_configured": true, 00:16:11.386 "data_offset": 0, 00:16:11.386 "data_size": 65536 00:16:11.386 } 00:16:11.386 ] 00:16:11.386 }' 00:16:11.386 11:01:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.386 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.645 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.645 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:11.645 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.645 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.645 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.645 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:11.645 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.645 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:11.645 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.645 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.645 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f4bc9b7a-4c86-4fae-8559-6b9c1176f90c 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.905 [2024-11-15 11:01:18.621809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:11.905 [2024-11-15 
11:01:18.621861] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:11.905 [2024-11-15 11:01:18.621868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:11.905 [2024-11-15 11:01:18.622101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:11.905 [2024-11-15 11:01:18.629129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:11.905 [2024-11-15 11:01:18.629199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:11.905 [2024-11-15 11:01:18.629468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.905 NewBaseBdev 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.905 [ 00:16:11.905 { 00:16:11.905 "name": "NewBaseBdev", 00:16:11.905 "aliases": [ 00:16:11.905 "f4bc9b7a-4c86-4fae-8559-6b9c1176f90c" 00:16:11.905 ], 00:16:11.905 "product_name": "Malloc disk", 00:16:11.905 "block_size": 512, 00:16:11.905 "num_blocks": 65536, 00:16:11.905 "uuid": "f4bc9b7a-4c86-4fae-8559-6b9c1176f90c", 00:16:11.905 "assigned_rate_limits": { 00:16:11.905 "rw_ios_per_sec": 0, 00:16:11.905 "rw_mbytes_per_sec": 0, 00:16:11.905 "r_mbytes_per_sec": 0, 00:16:11.905 "w_mbytes_per_sec": 0 00:16:11.905 }, 00:16:11.905 "claimed": true, 00:16:11.905 "claim_type": "exclusive_write", 00:16:11.905 "zoned": false, 00:16:11.905 "supported_io_types": { 00:16:11.905 "read": true, 00:16:11.905 "write": true, 00:16:11.905 "unmap": true, 00:16:11.905 "flush": true, 00:16:11.905 "reset": true, 00:16:11.905 "nvme_admin": false, 00:16:11.905 "nvme_io": false, 00:16:11.905 "nvme_io_md": false, 00:16:11.905 "write_zeroes": true, 00:16:11.905 "zcopy": true, 00:16:11.905 "get_zone_info": false, 00:16:11.905 "zone_management": false, 00:16:11.905 "zone_append": false, 00:16:11.905 "compare": false, 00:16:11.905 "compare_and_write": false, 00:16:11.905 "abort": true, 00:16:11.905 "seek_hole": false, 00:16:11.905 "seek_data": false, 00:16:11.905 "copy": true, 00:16:11.905 "nvme_iov_md": false 00:16:11.905 }, 00:16:11.905 "memory_domains": [ 00:16:11.905 { 00:16:11.905 "dma_device_id": "system", 00:16:11.905 "dma_device_type": 1 00:16:11.905 }, 00:16:11.905 { 00:16:11.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.905 "dma_device_type": 2 00:16:11.905 } 
00:16:11.905 ], 00:16:11.905 "driver_specific": {} 00:16:11.905 } 00:16:11.905 ] 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.905 "name": "Existed_Raid", 00:16:11.905 "uuid": "b45afc5e-9d1d-40e5-92e3-4607b760964c", 00:16:11.905 "strip_size_kb": 64, 00:16:11.905 "state": "online", 00:16:11.905 "raid_level": "raid5f", 00:16:11.905 "superblock": false, 00:16:11.905 "num_base_bdevs": 4, 00:16:11.905 "num_base_bdevs_discovered": 4, 00:16:11.905 "num_base_bdevs_operational": 4, 00:16:11.905 "base_bdevs_list": [ 00:16:11.905 { 00:16:11.905 "name": "NewBaseBdev", 00:16:11.905 "uuid": "f4bc9b7a-4c86-4fae-8559-6b9c1176f90c", 00:16:11.905 "is_configured": true, 00:16:11.905 "data_offset": 0, 00:16:11.905 "data_size": 65536 00:16:11.905 }, 00:16:11.905 { 00:16:11.905 "name": "BaseBdev2", 00:16:11.905 "uuid": "923314b8-2e7f-4a81-86d6-8aeb8e15640b", 00:16:11.905 "is_configured": true, 00:16:11.905 "data_offset": 0, 00:16:11.905 "data_size": 65536 00:16:11.905 }, 00:16:11.905 { 00:16:11.905 "name": "BaseBdev3", 00:16:11.905 "uuid": "74ee9162-c52c-4ee8-85e2-3df4200dba25", 00:16:11.905 "is_configured": true, 00:16:11.905 "data_offset": 0, 00:16:11.905 "data_size": 65536 00:16:11.905 }, 00:16:11.905 { 00:16:11.905 "name": "BaseBdev4", 00:16:11.905 "uuid": "116ac4f7-0e95-4206-a7e1-4a9f87a6b9cd", 00:16:11.905 "is_configured": true, 00:16:11.905 "data_offset": 0, 00:16:11.905 "data_size": 65536 00:16:11.905 } 00:16:11.905 ] 00:16:11.905 }' 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.905 11:01:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.476 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:12.476 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:12.476 11:01:19 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:12.476 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:12.476 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:12.476 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:12.476 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:12.476 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:12.476 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.476 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.476 [2024-11-15 11:01:19.133068] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.476 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.476 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:12.476 "name": "Existed_Raid", 00:16:12.476 "aliases": [ 00:16:12.476 "b45afc5e-9d1d-40e5-92e3-4607b760964c" 00:16:12.476 ], 00:16:12.476 "product_name": "Raid Volume", 00:16:12.476 "block_size": 512, 00:16:12.476 "num_blocks": 196608, 00:16:12.477 "uuid": "b45afc5e-9d1d-40e5-92e3-4607b760964c", 00:16:12.477 "assigned_rate_limits": { 00:16:12.477 "rw_ios_per_sec": 0, 00:16:12.477 "rw_mbytes_per_sec": 0, 00:16:12.477 "r_mbytes_per_sec": 0, 00:16:12.477 "w_mbytes_per_sec": 0 00:16:12.477 }, 00:16:12.477 "claimed": false, 00:16:12.477 "zoned": false, 00:16:12.477 "supported_io_types": { 00:16:12.477 "read": true, 00:16:12.477 "write": true, 00:16:12.477 "unmap": false, 00:16:12.477 "flush": false, 00:16:12.477 "reset": true, 00:16:12.477 "nvme_admin": false, 00:16:12.477 "nvme_io": false, 00:16:12.477 "nvme_io_md": 
false, 00:16:12.477 "write_zeroes": true, 00:16:12.477 "zcopy": false, 00:16:12.477 "get_zone_info": false, 00:16:12.477 "zone_management": false, 00:16:12.477 "zone_append": false, 00:16:12.477 "compare": false, 00:16:12.477 "compare_and_write": false, 00:16:12.477 "abort": false, 00:16:12.477 "seek_hole": false, 00:16:12.477 "seek_data": false, 00:16:12.477 "copy": false, 00:16:12.477 "nvme_iov_md": false 00:16:12.477 }, 00:16:12.477 "driver_specific": { 00:16:12.477 "raid": { 00:16:12.477 "uuid": "b45afc5e-9d1d-40e5-92e3-4607b760964c", 00:16:12.477 "strip_size_kb": 64, 00:16:12.477 "state": "online", 00:16:12.477 "raid_level": "raid5f", 00:16:12.477 "superblock": false, 00:16:12.477 "num_base_bdevs": 4, 00:16:12.477 "num_base_bdevs_discovered": 4, 00:16:12.477 "num_base_bdevs_operational": 4, 00:16:12.477 "base_bdevs_list": [ 00:16:12.477 { 00:16:12.477 "name": "NewBaseBdev", 00:16:12.477 "uuid": "f4bc9b7a-4c86-4fae-8559-6b9c1176f90c", 00:16:12.477 "is_configured": true, 00:16:12.477 "data_offset": 0, 00:16:12.477 "data_size": 65536 00:16:12.477 }, 00:16:12.477 { 00:16:12.477 "name": "BaseBdev2", 00:16:12.477 "uuid": "923314b8-2e7f-4a81-86d6-8aeb8e15640b", 00:16:12.477 "is_configured": true, 00:16:12.477 "data_offset": 0, 00:16:12.477 "data_size": 65536 00:16:12.477 }, 00:16:12.477 { 00:16:12.477 "name": "BaseBdev3", 00:16:12.477 "uuid": "74ee9162-c52c-4ee8-85e2-3df4200dba25", 00:16:12.477 "is_configured": true, 00:16:12.477 "data_offset": 0, 00:16:12.477 "data_size": 65536 00:16:12.477 }, 00:16:12.477 { 00:16:12.477 "name": "BaseBdev4", 00:16:12.477 "uuid": "116ac4f7-0e95-4206-a7e1-4a9f87a6b9cd", 00:16:12.477 "is_configured": true, 00:16:12.477 "data_offset": 0, 00:16:12.477 "data_size": 65536 00:16:12.477 } 00:16:12.477 ] 00:16:12.477 } 00:16:12.477 } 00:16:12.477 }' 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:12.477 11:01:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:12.477 BaseBdev2 00:16:12.477 BaseBdev3 00:16:12.477 BaseBdev4' 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.477 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.737 [2024-11-15 11:01:19.484216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:12.737 [2024-11-15 11:01:19.484246] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.737 [2024-11-15 11:01:19.484332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.737 [2024-11-15 11:01:19.484651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.737 [2024-11-15 11:01:19.484663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82948 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 82948 ']' 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 82948 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:12.737 11:01:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82948 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:12.737 killing process with pid 82948 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82948' 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 82948 00:16:12.737 [2024-11-15 11:01:19.535377] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:12.737 11:01:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 82948 00:16:13.306 [2024-11-15 11:01:19.924224] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.244 11:01:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:14.244 00:16:14.244 real 0m11.791s 00:16:14.244 user 0m18.766s 00:16:14.244 sys 0m2.119s 00:16:14.244 11:01:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:14.244 11:01:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.244 ************************************ 00:16:14.244 END TEST raid5f_state_function_test 00:16:14.244 ************************************ 00:16:14.244 11:01:21 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:14.244 11:01:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:14.244 11:01:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:14.244 11:01:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.244 ************************************ 00:16:14.244 START TEST 
raid5f_state_function_test_sb 00:16:14.244 ************************************ 00:16:14.244 11:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:16:14.244 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:14.244 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:14.244 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:14.244 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:14.245 
11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83619 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83619' 00:16:14.245 Process raid pid: 83619 00:16:14.245 11:01:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83619 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 83619 ']' 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:14.245 11:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.504 [2024-11-15 11:01:21.173271] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:16:14.504 [2024-11-15 11:01:21.173416] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.504 [2024-11-15 11:01:21.346798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.764 [2024-11-15 11:01:21.460006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.764 [2024-11-15 11:01:21.658726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.764 [2024-11-15 11:01:21.658763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.333 11:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:15.333 11:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:15.333 11:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:15.333 11:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.333 11:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.333 [2024-11-15 11:01:22.003385] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:15.333 [2024-11-15 11:01:22.003435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:15.333 [2024-11-15 11:01:22.003449] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:15.333 [2024-11-15 11:01:22.003459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:15.333 [2024-11-15 11:01:22.003466] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:15.333 [2024-11-15 11:01:22.003474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:15.333 [2024-11-15 11:01:22.003479] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:15.333 [2024-11-15 11:01:22.003487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.333 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.333 "name": "Existed_Raid", 00:16:15.333 "uuid": "ac6c2887-83f9-493f-8186-4c0c174e28cb", 00:16:15.333 "strip_size_kb": 64, 00:16:15.333 "state": "configuring", 00:16:15.333 "raid_level": "raid5f", 00:16:15.333 "superblock": true, 00:16:15.333 "num_base_bdevs": 4, 00:16:15.333 "num_base_bdevs_discovered": 0, 00:16:15.333 "num_base_bdevs_operational": 4, 00:16:15.333 "base_bdevs_list": [ 00:16:15.333 { 00:16:15.333 "name": "BaseBdev1", 00:16:15.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.333 "is_configured": false, 00:16:15.333 "data_offset": 0, 00:16:15.333 "data_size": 0 00:16:15.333 }, 00:16:15.333 { 00:16:15.333 "name": "BaseBdev2", 00:16:15.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.333 "is_configured": false, 00:16:15.333 "data_offset": 0, 00:16:15.333 "data_size": 0 00:16:15.334 }, 00:16:15.334 { 00:16:15.334 "name": "BaseBdev3", 00:16:15.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.334 "is_configured": false, 00:16:15.334 "data_offset": 0, 00:16:15.334 "data_size": 0 00:16:15.334 }, 00:16:15.334 { 00:16:15.334 "name": "BaseBdev4", 00:16:15.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.334 "is_configured": false, 00:16:15.334 "data_offset": 0, 00:16:15.334 "data_size": 0 00:16:15.334 } 00:16:15.334 ] 00:16:15.334 }' 00:16:15.334 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.334 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:15.592 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:15.592 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.592 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.592 [2024-11-15 11:01:22.446580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:15.592 [2024-11-15 11:01:22.446682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:15.592 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.592 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:15.592 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.592 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.592 [2024-11-15 11:01:22.458574] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:15.592 [2024-11-15 11:01:22.458659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:15.592 [2024-11-15 11:01:22.458704] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:15.592 [2024-11-15 11:01:22.458734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:15.592 [2024-11-15 11:01:22.458755] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:15.592 [2024-11-15 11:01:22.458778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:15.592 [2024-11-15 11:01:22.458799] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:15.592 [2024-11-15 11:01:22.458830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:15.592 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.592 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:15.592 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.592 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.592 [2024-11-15 11:01:22.506387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.592 BaseBdev1 00:16:15.592 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.592 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:15.592 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:15.592 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:15.592 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:15.593 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:15.593 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:15.593 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:15.593 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.593 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:15.851 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.851 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:15.851 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.851 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.851 [ 00:16:15.851 { 00:16:15.851 "name": "BaseBdev1", 00:16:15.851 "aliases": [ 00:16:15.851 "1f480208-5792-4cc0-b2a9-adc71557d985" 00:16:15.851 ], 00:16:15.851 "product_name": "Malloc disk", 00:16:15.851 "block_size": 512, 00:16:15.851 "num_blocks": 65536, 00:16:15.851 "uuid": "1f480208-5792-4cc0-b2a9-adc71557d985", 00:16:15.851 "assigned_rate_limits": { 00:16:15.851 "rw_ios_per_sec": 0, 00:16:15.851 "rw_mbytes_per_sec": 0, 00:16:15.851 "r_mbytes_per_sec": 0, 00:16:15.851 "w_mbytes_per_sec": 0 00:16:15.851 }, 00:16:15.851 "claimed": true, 00:16:15.851 "claim_type": "exclusive_write", 00:16:15.851 "zoned": false, 00:16:15.851 "supported_io_types": { 00:16:15.851 "read": true, 00:16:15.851 "write": true, 00:16:15.851 "unmap": true, 00:16:15.851 "flush": true, 00:16:15.851 "reset": true, 00:16:15.851 "nvme_admin": false, 00:16:15.851 "nvme_io": false, 00:16:15.851 "nvme_io_md": false, 00:16:15.851 "write_zeroes": true, 00:16:15.851 "zcopy": true, 00:16:15.851 "get_zone_info": false, 00:16:15.851 "zone_management": false, 00:16:15.851 "zone_append": false, 00:16:15.851 "compare": false, 00:16:15.851 "compare_and_write": false, 00:16:15.851 "abort": true, 00:16:15.851 "seek_hole": false, 00:16:15.851 "seek_data": false, 00:16:15.851 "copy": true, 00:16:15.851 "nvme_iov_md": false 00:16:15.851 }, 00:16:15.851 "memory_domains": [ 00:16:15.851 { 00:16:15.851 "dma_device_id": "system", 00:16:15.851 "dma_device_type": 1 00:16:15.851 }, 00:16:15.851 { 00:16:15.851 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:15.851 "dma_device_type": 2 00:16:15.851 } 00:16:15.851 ], 00:16:15.851 "driver_specific": {} 00:16:15.851 } 00:16:15.851 ] 00:16:15.851 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.851 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:15.851 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:15.851 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.851 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.851 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.851 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.851 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.851 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.852 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.852 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.852 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.852 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.852 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.852 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.852 11:01:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.852 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.852 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.852 "name": "Existed_Raid", 00:16:15.852 "uuid": "0ff257de-5c5d-49ca-a1a5-8491152d5133", 00:16:15.852 "strip_size_kb": 64, 00:16:15.852 "state": "configuring", 00:16:15.852 "raid_level": "raid5f", 00:16:15.852 "superblock": true, 00:16:15.852 "num_base_bdevs": 4, 00:16:15.852 "num_base_bdevs_discovered": 1, 00:16:15.852 "num_base_bdevs_operational": 4, 00:16:15.852 "base_bdevs_list": [ 00:16:15.852 { 00:16:15.852 "name": "BaseBdev1", 00:16:15.852 "uuid": "1f480208-5792-4cc0-b2a9-adc71557d985", 00:16:15.852 "is_configured": true, 00:16:15.852 "data_offset": 2048, 00:16:15.852 "data_size": 63488 00:16:15.852 }, 00:16:15.852 { 00:16:15.852 "name": "BaseBdev2", 00:16:15.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.852 "is_configured": false, 00:16:15.852 "data_offset": 0, 00:16:15.852 "data_size": 0 00:16:15.852 }, 00:16:15.852 { 00:16:15.852 "name": "BaseBdev3", 00:16:15.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.852 "is_configured": false, 00:16:15.852 "data_offset": 0, 00:16:15.852 "data_size": 0 00:16:15.852 }, 00:16:15.852 { 00:16:15.852 "name": "BaseBdev4", 00:16:15.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.852 "is_configured": false, 00:16:15.852 "data_offset": 0, 00:16:15.852 "data_size": 0 00:16:15.852 } 00:16:15.852 ] 00:16:15.852 }' 00:16:15.852 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.852 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:16.110 11:01:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.110 [2024-11-15 11:01:22.917724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:16.110 [2024-11-15 11:01:22.917836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.110 [2024-11-15 11:01:22.929742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.110 [2024-11-15 11:01:22.931562] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:16.110 [2024-11-15 11:01:22.931637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:16.110 [2024-11-15 11:01:22.931665] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:16.110 [2024-11-15 11:01:22.931689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:16.110 [2024-11-15 11:01:22.931707] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:16.110 [2024-11-15 11:01:22.931727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.110 11:01:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.110 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.110 "name": "Existed_Raid", 00:16:16.110 "uuid": "e7481c98-7329-42e7-8712-00b25b3ee8a3", 00:16:16.110 "strip_size_kb": 64, 00:16:16.110 "state": "configuring", 00:16:16.110 "raid_level": "raid5f", 00:16:16.110 "superblock": true, 00:16:16.110 "num_base_bdevs": 4, 00:16:16.110 "num_base_bdevs_discovered": 1, 00:16:16.110 "num_base_bdevs_operational": 4, 00:16:16.110 "base_bdevs_list": [ 00:16:16.110 { 00:16:16.110 "name": "BaseBdev1", 00:16:16.110 "uuid": "1f480208-5792-4cc0-b2a9-adc71557d985", 00:16:16.110 "is_configured": true, 00:16:16.110 "data_offset": 2048, 00:16:16.110 "data_size": 63488 00:16:16.110 }, 00:16:16.110 { 00:16:16.110 "name": "BaseBdev2", 00:16:16.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.110 "is_configured": false, 00:16:16.110 "data_offset": 0, 00:16:16.110 "data_size": 0 00:16:16.110 }, 00:16:16.110 { 00:16:16.110 "name": "BaseBdev3", 00:16:16.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.110 "is_configured": false, 00:16:16.110 "data_offset": 0, 00:16:16.110 "data_size": 0 00:16:16.110 }, 00:16:16.110 { 00:16:16.110 "name": "BaseBdev4", 00:16:16.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.110 "is_configured": false, 00:16:16.110 "data_offset": 0, 00:16:16.110 "data_size": 0 00:16:16.110 } 00:16:16.110 ] 00:16:16.110 }' 00:16:16.111 11:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.111 11:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.681 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:16.681 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:16.681 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.681 [2024-11-15 11:01:23.387530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.681 BaseBdev2 00:16:16.681 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.681 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:16.681 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:16.681 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:16.681 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:16.681 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:16.681 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:16.681 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.682 [ 00:16:16.682 { 00:16:16.682 "name": "BaseBdev2", 00:16:16.682 "aliases": [ 00:16:16.682 
"fccf88aa-4ebd-47b5-baf4-bd5ecf32e9f9" 00:16:16.682 ], 00:16:16.682 "product_name": "Malloc disk", 00:16:16.682 "block_size": 512, 00:16:16.682 "num_blocks": 65536, 00:16:16.682 "uuid": "fccf88aa-4ebd-47b5-baf4-bd5ecf32e9f9", 00:16:16.682 "assigned_rate_limits": { 00:16:16.682 "rw_ios_per_sec": 0, 00:16:16.682 "rw_mbytes_per_sec": 0, 00:16:16.682 "r_mbytes_per_sec": 0, 00:16:16.682 "w_mbytes_per_sec": 0 00:16:16.682 }, 00:16:16.682 "claimed": true, 00:16:16.682 "claim_type": "exclusive_write", 00:16:16.682 "zoned": false, 00:16:16.682 "supported_io_types": { 00:16:16.682 "read": true, 00:16:16.682 "write": true, 00:16:16.682 "unmap": true, 00:16:16.682 "flush": true, 00:16:16.682 "reset": true, 00:16:16.682 "nvme_admin": false, 00:16:16.682 "nvme_io": false, 00:16:16.682 "nvme_io_md": false, 00:16:16.682 "write_zeroes": true, 00:16:16.682 "zcopy": true, 00:16:16.682 "get_zone_info": false, 00:16:16.682 "zone_management": false, 00:16:16.682 "zone_append": false, 00:16:16.682 "compare": false, 00:16:16.682 "compare_and_write": false, 00:16:16.682 "abort": true, 00:16:16.682 "seek_hole": false, 00:16:16.682 "seek_data": false, 00:16:16.682 "copy": true, 00:16:16.682 "nvme_iov_md": false 00:16:16.682 }, 00:16:16.682 "memory_domains": [ 00:16:16.682 { 00:16:16.682 "dma_device_id": "system", 00:16:16.682 "dma_device_type": 1 00:16:16.682 }, 00:16:16.682 { 00:16:16.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.682 "dma_device_type": 2 00:16:16.682 } 00:16:16.682 ], 00:16:16.682 "driver_specific": {} 00:16:16.682 } 00:16:16.682 ] 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.682 "name": "Existed_Raid", 00:16:16.682 "uuid": 
"e7481c98-7329-42e7-8712-00b25b3ee8a3", 00:16:16.682 "strip_size_kb": 64, 00:16:16.682 "state": "configuring", 00:16:16.682 "raid_level": "raid5f", 00:16:16.682 "superblock": true, 00:16:16.682 "num_base_bdevs": 4, 00:16:16.682 "num_base_bdevs_discovered": 2, 00:16:16.682 "num_base_bdevs_operational": 4, 00:16:16.682 "base_bdevs_list": [ 00:16:16.682 { 00:16:16.682 "name": "BaseBdev1", 00:16:16.682 "uuid": "1f480208-5792-4cc0-b2a9-adc71557d985", 00:16:16.682 "is_configured": true, 00:16:16.682 "data_offset": 2048, 00:16:16.682 "data_size": 63488 00:16:16.682 }, 00:16:16.682 { 00:16:16.682 "name": "BaseBdev2", 00:16:16.682 "uuid": "fccf88aa-4ebd-47b5-baf4-bd5ecf32e9f9", 00:16:16.682 "is_configured": true, 00:16:16.682 "data_offset": 2048, 00:16:16.682 "data_size": 63488 00:16:16.682 }, 00:16:16.682 { 00:16:16.682 "name": "BaseBdev3", 00:16:16.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.682 "is_configured": false, 00:16:16.682 "data_offset": 0, 00:16:16.682 "data_size": 0 00:16:16.682 }, 00:16:16.682 { 00:16:16.682 "name": "BaseBdev4", 00:16:16.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.682 "is_configured": false, 00:16:16.682 "data_offset": 0, 00:16:16.682 "data_size": 0 00:16:16.682 } 00:16:16.682 ] 00:16:16.682 }' 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.682 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.955 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:16.955 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.955 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.955 [2024-11-15 11:01:23.875465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:16.955 BaseBdev3 
00:16:16.955 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.955 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:16.955 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:16.955 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:16.955 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:16.955 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:16.955 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:16.955 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:16.955 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.955 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.213 [ 00:16:17.213 { 00:16:17.213 "name": "BaseBdev3", 00:16:17.213 "aliases": [ 00:16:17.213 "30ba9bb7-d863-482c-9926-8562b030baed" 00:16:17.213 ], 00:16:17.213 "product_name": "Malloc disk", 00:16:17.213 "block_size": 512, 00:16:17.213 "num_blocks": 65536, 00:16:17.213 "uuid": "30ba9bb7-d863-482c-9926-8562b030baed", 00:16:17.213 
"assigned_rate_limits": { 00:16:17.213 "rw_ios_per_sec": 0, 00:16:17.213 "rw_mbytes_per_sec": 0, 00:16:17.213 "r_mbytes_per_sec": 0, 00:16:17.213 "w_mbytes_per_sec": 0 00:16:17.213 }, 00:16:17.213 "claimed": true, 00:16:17.213 "claim_type": "exclusive_write", 00:16:17.213 "zoned": false, 00:16:17.213 "supported_io_types": { 00:16:17.213 "read": true, 00:16:17.213 "write": true, 00:16:17.213 "unmap": true, 00:16:17.213 "flush": true, 00:16:17.213 "reset": true, 00:16:17.213 "nvme_admin": false, 00:16:17.213 "nvme_io": false, 00:16:17.213 "nvme_io_md": false, 00:16:17.213 "write_zeroes": true, 00:16:17.213 "zcopy": true, 00:16:17.213 "get_zone_info": false, 00:16:17.213 "zone_management": false, 00:16:17.213 "zone_append": false, 00:16:17.213 "compare": false, 00:16:17.213 "compare_and_write": false, 00:16:17.213 "abort": true, 00:16:17.213 "seek_hole": false, 00:16:17.213 "seek_data": false, 00:16:17.213 "copy": true, 00:16:17.213 "nvme_iov_md": false 00:16:17.213 }, 00:16:17.213 "memory_domains": [ 00:16:17.213 { 00:16:17.213 "dma_device_id": "system", 00:16:17.213 "dma_device_type": 1 00:16:17.213 }, 00:16:17.213 { 00:16:17.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.213 "dma_device_type": 2 00:16:17.213 } 00:16:17.213 ], 00:16:17.213 "driver_specific": {} 00:16:17.213 } 00:16:17.213 ] 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.213 "name": "Existed_Raid", 00:16:17.213 "uuid": "e7481c98-7329-42e7-8712-00b25b3ee8a3", 00:16:17.213 "strip_size_kb": 64, 00:16:17.213 "state": "configuring", 00:16:17.213 "raid_level": "raid5f", 00:16:17.213 "superblock": true, 00:16:17.213 "num_base_bdevs": 4, 00:16:17.213 "num_base_bdevs_discovered": 3, 
00:16:17.213 "num_base_bdevs_operational": 4, 00:16:17.213 "base_bdevs_list": [ 00:16:17.213 { 00:16:17.213 "name": "BaseBdev1", 00:16:17.213 "uuid": "1f480208-5792-4cc0-b2a9-adc71557d985", 00:16:17.213 "is_configured": true, 00:16:17.213 "data_offset": 2048, 00:16:17.213 "data_size": 63488 00:16:17.213 }, 00:16:17.213 { 00:16:17.213 "name": "BaseBdev2", 00:16:17.213 "uuid": "fccf88aa-4ebd-47b5-baf4-bd5ecf32e9f9", 00:16:17.213 "is_configured": true, 00:16:17.213 "data_offset": 2048, 00:16:17.213 "data_size": 63488 00:16:17.213 }, 00:16:17.213 { 00:16:17.213 "name": "BaseBdev3", 00:16:17.213 "uuid": "30ba9bb7-d863-482c-9926-8562b030baed", 00:16:17.213 "is_configured": true, 00:16:17.213 "data_offset": 2048, 00:16:17.213 "data_size": 63488 00:16:17.213 }, 00:16:17.213 { 00:16:17.213 "name": "BaseBdev4", 00:16:17.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.213 "is_configured": false, 00:16:17.213 "data_offset": 0, 00:16:17.213 "data_size": 0 00:16:17.213 } 00:16:17.213 ] 00:16:17.213 }' 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.213 11:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.472 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:17.472 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.472 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.472 [2024-11-15 11:01:24.384247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:17.472 [2024-11-15 11:01:24.384602] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:17.472 [2024-11-15 11:01:24.384657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:17.472 [2024-11-15 
11:01:24.384958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:17.472 BaseBdev4 00:16:17.472 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.472 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:17.472 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:17.472 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:17.472 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:17.472 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:17.472 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:17.472 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:17.472 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.472 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.472 [2024-11-15 11:01:24.392466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:17.472 [2024-11-15 11:01:24.392527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:17.472 [2024-11-15 11:01:24.392815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.731 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.731 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:17.731 11:01:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.731 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.731 [ 00:16:17.731 { 00:16:17.731 "name": "BaseBdev4", 00:16:17.731 "aliases": [ 00:16:17.731 "2a0c3743-3ffe-4bf7-9fb7-f521016242a4" 00:16:17.731 ], 00:16:17.731 "product_name": "Malloc disk", 00:16:17.731 "block_size": 512, 00:16:17.731 "num_blocks": 65536, 00:16:17.731 "uuid": "2a0c3743-3ffe-4bf7-9fb7-f521016242a4", 00:16:17.731 "assigned_rate_limits": { 00:16:17.731 "rw_ios_per_sec": 0, 00:16:17.731 "rw_mbytes_per_sec": 0, 00:16:17.731 "r_mbytes_per_sec": 0, 00:16:17.731 "w_mbytes_per_sec": 0 00:16:17.731 }, 00:16:17.731 "claimed": true, 00:16:17.731 "claim_type": "exclusive_write", 00:16:17.731 "zoned": false, 00:16:17.731 "supported_io_types": { 00:16:17.731 "read": true, 00:16:17.731 "write": true, 00:16:17.731 "unmap": true, 00:16:17.731 "flush": true, 00:16:17.731 "reset": true, 00:16:17.731 "nvme_admin": false, 00:16:17.731 "nvme_io": false, 00:16:17.731 "nvme_io_md": false, 00:16:17.731 "write_zeroes": true, 00:16:17.731 "zcopy": true, 00:16:17.731 "get_zone_info": false, 00:16:17.731 "zone_management": false, 00:16:17.731 "zone_append": false, 00:16:17.731 "compare": false, 00:16:17.731 "compare_and_write": false, 00:16:17.731 "abort": true, 00:16:17.731 "seek_hole": false, 00:16:17.731 "seek_data": false, 00:16:17.731 "copy": true, 00:16:17.731 "nvme_iov_md": false 00:16:17.731 }, 00:16:17.731 "memory_domains": [ 00:16:17.731 { 00:16:17.731 "dma_device_id": "system", 00:16:17.731 "dma_device_type": 1 00:16:17.731 }, 00:16:17.731 { 00:16:17.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.731 "dma_device_type": 2 00:16:17.731 } 00:16:17.732 ], 00:16:17.732 "driver_specific": {} 00:16:17.732 } 00:16:17.732 ] 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.732 11:01:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.732 "name": "Existed_Raid", 00:16:17.732 "uuid": "e7481c98-7329-42e7-8712-00b25b3ee8a3", 00:16:17.732 "strip_size_kb": 64, 00:16:17.732 "state": "online", 00:16:17.732 "raid_level": "raid5f", 00:16:17.732 "superblock": true, 00:16:17.732 "num_base_bdevs": 4, 00:16:17.732 "num_base_bdevs_discovered": 4, 00:16:17.732 "num_base_bdevs_operational": 4, 00:16:17.732 "base_bdevs_list": [ 00:16:17.732 { 00:16:17.732 "name": "BaseBdev1", 00:16:17.732 "uuid": "1f480208-5792-4cc0-b2a9-adc71557d985", 00:16:17.732 "is_configured": true, 00:16:17.732 "data_offset": 2048, 00:16:17.732 "data_size": 63488 00:16:17.732 }, 00:16:17.732 { 00:16:17.732 "name": "BaseBdev2", 00:16:17.732 "uuid": "fccf88aa-4ebd-47b5-baf4-bd5ecf32e9f9", 00:16:17.732 "is_configured": true, 00:16:17.732 "data_offset": 2048, 00:16:17.732 "data_size": 63488 00:16:17.732 }, 00:16:17.732 { 00:16:17.732 "name": "BaseBdev3", 00:16:17.732 "uuid": "30ba9bb7-d863-482c-9926-8562b030baed", 00:16:17.732 "is_configured": true, 00:16:17.732 "data_offset": 2048, 00:16:17.732 "data_size": 63488 00:16:17.732 }, 00:16:17.732 { 00:16:17.732 "name": "BaseBdev4", 00:16:17.732 "uuid": "2a0c3743-3ffe-4bf7-9fb7-f521016242a4", 00:16:17.732 "is_configured": true, 00:16:17.732 "data_offset": 2048, 00:16:17.732 "data_size": 63488 00:16:17.732 } 00:16:17.732 ] 00:16:17.732 }' 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.732 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.999 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:17.999 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:17.999 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:17.999 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:17.999 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:17.999 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:17.999 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:17.999 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.999 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:17.999 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.999 [2024-11-15 11:01:24.852528] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.999 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.999 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:17.999 "name": "Existed_Raid", 00:16:17.999 "aliases": [ 00:16:17.999 "e7481c98-7329-42e7-8712-00b25b3ee8a3" 00:16:17.999 ], 00:16:17.999 "product_name": "Raid Volume", 00:16:17.999 "block_size": 512, 00:16:17.999 "num_blocks": 190464, 00:16:17.999 "uuid": "e7481c98-7329-42e7-8712-00b25b3ee8a3", 00:16:17.999 "assigned_rate_limits": { 00:16:17.999 "rw_ios_per_sec": 0, 00:16:17.999 "rw_mbytes_per_sec": 0, 00:16:17.999 "r_mbytes_per_sec": 0, 00:16:17.999 "w_mbytes_per_sec": 0 00:16:17.999 }, 00:16:17.999 "claimed": false, 00:16:17.999 "zoned": false, 00:16:17.999 "supported_io_types": { 00:16:17.999 "read": true, 00:16:17.999 "write": true, 00:16:17.999 "unmap": false, 00:16:17.999 "flush": false, 
00:16:17.999 "reset": true, 00:16:17.999 "nvme_admin": false, 00:16:17.999 "nvme_io": false, 00:16:17.999 "nvme_io_md": false, 00:16:17.999 "write_zeroes": true, 00:16:17.999 "zcopy": false, 00:16:17.999 "get_zone_info": false, 00:16:17.999 "zone_management": false, 00:16:17.999 "zone_append": false, 00:16:17.999 "compare": false, 00:16:17.999 "compare_and_write": false, 00:16:17.999 "abort": false, 00:16:17.999 "seek_hole": false, 00:16:17.999 "seek_data": false, 00:16:17.999 "copy": false, 00:16:17.999 "nvme_iov_md": false 00:16:17.999 }, 00:16:17.999 "driver_specific": { 00:16:17.999 "raid": { 00:16:17.999 "uuid": "e7481c98-7329-42e7-8712-00b25b3ee8a3", 00:16:17.999 "strip_size_kb": 64, 00:16:17.999 "state": "online", 00:16:17.999 "raid_level": "raid5f", 00:16:17.999 "superblock": true, 00:16:17.999 "num_base_bdevs": 4, 00:16:17.999 "num_base_bdevs_discovered": 4, 00:16:18.000 "num_base_bdevs_operational": 4, 00:16:18.000 "base_bdevs_list": [ 00:16:18.000 { 00:16:18.000 "name": "BaseBdev1", 00:16:18.000 "uuid": "1f480208-5792-4cc0-b2a9-adc71557d985", 00:16:18.000 "is_configured": true, 00:16:18.000 "data_offset": 2048, 00:16:18.000 "data_size": 63488 00:16:18.000 }, 00:16:18.000 { 00:16:18.000 "name": "BaseBdev2", 00:16:18.000 "uuid": "fccf88aa-4ebd-47b5-baf4-bd5ecf32e9f9", 00:16:18.000 "is_configured": true, 00:16:18.000 "data_offset": 2048, 00:16:18.000 "data_size": 63488 00:16:18.000 }, 00:16:18.000 { 00:16:18.000 "name": "BaseBdev3", 00:16:18.000 "uuid": "30ba9bb7-d863-482c-9926-8562b030baed", 00:16:18.000 "is_configured": true, 00:16:18.000 "data_offset": 2048, 00:16:18.000 "data_size": 63488 00:16:18.000 }, 00:16:18.000 { 00:16:18.000 "name": "BaseBdev4", 00:16:18.000 "uuid": "2a0c3743-3ffe-4bf7-9fb7-f521016242a4", 00:16:18.000 "is_configured": true, 00:16:18.000 "data_offset": 2048, 00:16:18.000 "data_size": 63488 00:16:18.000 } 00:16:18.000 ] 00:16:18.000 } 00:16:18.000 } 00:16:18.000 }' 00:16:18.000 11:01:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:18.262 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:18.262 BaseBdev2 00:16:18.262 BaseBdev3 00:16:18.262 BaseBdev4' 00:16:18.262 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.262 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:18.262 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.262 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.262 11:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:18.262 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.262 11:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.262 11:01:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.262 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.262 [2024-11-15 11:01:25.159766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.521 "name": "Existed_Raid", 00:16:18.521 "uuid": "e7481c98-7329-42e7-8712-00b25b3ee8a3", 00:16:18.521 "strip_size_kb": 64, 00:16:18.521 "state": "online", 00:16:18.521 "raid_level": "raid5f", 00:16:18.521 "superblock": true, 00:16:18.521 "num_base_bdevs": 4, 00:16:18.521 "num_base_bdevs_discovered": 3, 00:16:18.521 "num_base_bdevs_operational": 3, 00:16:18.521 "base_bdevs_list": [ 00:16:18.521 { 00:16:18.521 "name": null, 00:16:18.521 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:18.521 "is_configured": false, 00:16:18.521 "data_offset": 0, 00:16:18.521 "data_size": 63488 00:16:18.521 }, 00:16:18.521 { 00:16:18.521 "name": "BaseBdev2", 00:16:18.521 "uuid": "fccf88aa-4ebd-47b5-baf4-bd5ecf32e9f9", 00:16:18.521 "is_configured": true, 00:16:18.521 "data_offset": 2048, 00:16:18.521 "data_size": 63488 00:16:18.521 }, 00:16:18.521 { 00:16:18.521 "name": "BaseBdev3", 00:16:18.521 "uuid": "30ba9bb7-d863-482c-9926-8562b030baed", 00:16:18.521 "is_configured": true, 00:16:18.521 "data_offset": 2048, 00:16:18.521 "data_size": 63488 00:16:18.521 }, 00:16:18.521 { 00:16:18.521 "name": "BaseBdev4", 00:16:18.521 "uuid": "2a0c3743-3ffe-4bf7-9fb7-f521016242a4", 00:16:18.521 "is_configured": true, 00:16:18.521 "data_offset": 2048, 00:16:18.521 "data_size": 63488 00:16:18.521 } 00:16:18.521 ] 00:16:18.521 }' 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.521 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.780 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:18.780 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:18.780 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.780 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.780 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.780 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.039 [2024-11-15 11:01:25.746229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:19.039 [2024-11-15 11:01:25.746462] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.039 [2024-11-15 11:01:25.841251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:19.039 
11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.039 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.039 [2024-11-15 11:01:25.897195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:19.298 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.298 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:19.298 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:19.298 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.298 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.298 11:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.298 11:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.298 [2024-11-15 11:01:26.052938] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:19.298 [2024-11-15 11:01:26.053051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.298 11:01:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.558 BaseBdev2 00:16:19.558 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.558 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:19.558 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:19.558 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:19.558 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:19.558 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:19.558 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:19.558 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:19.558 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.558 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.558 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.558 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:19.558 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.558 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.558 [ 00:16:19.558 { 00:16:19.558 "name": "BaseBdev2", 00:16:19.558 "aliases": [ 00:16:19.558 "64a9f031-8ee7-4174-80ef-c414c431730f" 00:16:19.558 ], 00:16:19.558 "product_name": "Malloc disk", 00:16:19.558 "block_size": 512, 00:16:19.558 "num_blocks": 65536, 00:16:19.558 "uuid": 
"64a9f031-8ee7-4174-80ef-c414c431730f", 00:16:19.558 "assigned_rate_limits": { 00:16:19.558 "rw_ios_per_sec": 0, 00:16:19.558 "rw_mbytes_per_sec": 0, 00:16:19.558 "r_mbytes_per_sec": 0, 00:16:19.558 "w_mbytes_per_sec": 0 00:16:19.558 }, 00:16:19.558 "claimed": false, 00:16:19.558 "zoned": false, 00:16:19.559 "supported_io_types": { 00:16:19.559 "read": true, 00:16:19.559 "write": true, 00:16:19.559 "unmap": true, 00:16:19.559 "flush": true, 00:16:19.559 "reset": true, 00:16:19.559 "nvme_admin": false, 00:16:19.559 "nvme_io": false, 00:16:19.559 "nvme_io_md": false, 00:16:19.559 "write_zeroes": true, 00:16:19.559 "zcopy": true, 00:16:19.559 "get_zone_info": false, 00:16:19.559 "zone_management": false, 00:16:19.559 "zone_append": false, 00:16:19.559 "compare": false, 00:16:19.559 "compare_and_write": false, 00:16:19.559 "abort": true, 00:16:19.559 "seek_hole": false, 00:16:19.559 "seek_data": false, 00:16:19.559 "copy": true, 00:16:19.559 "nvme_iov_md": false 00:16:19.559 }, 00:16:19.559 "memory_domains": [ 00:16:19.559 { 00:16:19.559 "dma_device_id": "system", 00:16:19.559 "dma_device_type": 1 00:16:19.559 }, 00:16:19.559 { 00:16:19.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.559 "dma_device_type": 2 00:16:19.559 } 00:16:19.559 ], 00:16:19.559 "driver_specific": {} 00:16:19.559 } 00:16:19.559 ] 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.559 BaseBdev3 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.559 [ 00:16:19.559 { 00:16:19.559 "name": "BaseBdev3", 00:16:19.559 "aliases": [ 00:16:19.559 "82ce1fbb-1d04-4146-aba9-cd6c1d8b1533" 00:16:19.559 ], 00:16:19.559 
"product_name": "Malloc disk", 00:16:19.559 "block_size": 512, 00:16:19.559 "num_blocks": 65536, 00:16:19.559 "uuid": "82ce1fbb-1d04-4146-aba9-cd6c1d8b1533", 00:16:19.559 "assigned_rate_limits": { 00:16:19.559 "rw_ios_per_sec": 0, 00:16:19.559 "rw_mbytes_per_sec": 0, 00:16:19.559 "r_mbytes_per_sec": 0, 00:16:19.559 "w_mbytes_per_sec": 0 00:16:19.559 }, 00:16:19.559 "claimed": false, 00:16:19.559 "zoned": false, 00:16:19.559 "supported_io_types": { 00:16:19.559 "read": true, 00:16:19.559 "write": true, 00:16:19.559 "unmap": true, 00:16:19.559 "flush": true, 00:16:19.559 "reset": true, 00:16:19.559 "nvme_admin": false, 00:16:19.559 "nvme_io": false, 00:16:19.559 "nvme_io_md": false, 00:16:19.559 "write_zeroes": true, 00:16:19.559 "zcopy": true, 00:16:19.559 "get_zone_info": false, 00:16:19.559 "zone_management": false, 00:16:19.559 "zone_append": false, 00:16:19.559 "compare": false, 00:16:19.559 "compare_and_write": false, 00:16:19.559 "abort": true, 00:16:19.559 "seek_hole": false, 00:16:19.559 "seek_data": false, 00:16:19.559 "copy": true, 00:16:19.559 "nvme_iov_md": false 00:16:19.559 }, 00:16:19.559 "memory_domains": [ 00:16:19.559 { 00:16:19.559 "dma_device_id": "system", 00:16:19.559 "dma_device_type": 1 00:16:19.559 }, 00:16:19.559 { 00:16:19.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.559 "dma_device_type": 2 00:16:19.559 } 00:16:19.559 ], 00:16:19.559 "driver_specific": {} 00:16:19.559 } 00:16:19.559 ] 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.559 BaseBdev4 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.559 [ 00:16:19.559 { 00:16:19.559 "name": "BaseBdev4", 00:16:19.559 
"aliases": [ 00:16:19.559 "e728fff6-9b26-4b20-8189-fa2cfc038c2f" 00:16:19.559 ], 00:16:19.559 "product_name": "Malloc disk", 00:16:19.559 "block_size": 512, 00:16:19.559 "num_blocks": 65536, 00:16:19.559 "uuid": "e728fff6-9b26-4b20-8189-fa2cfc038c2f", 00:16:19.559 "assigned_rate_limits": { 00:16:19.559 "rw_ios_per_sec": 0, 00:16:19.559 "rw_mbytes_per_sec": 0, 00:16:19.559 "r_mbytes_per_sec": 0, 00:16:19.559 "w_mbytes_per_sec": 0 00:16:19.559 }, 00:16:19.559 "claimed": false, 00:16:19.559 "zoned": false, 00:16:19.559 "supported_io_types": { 00:16:19.559 "read": true, 00:16:19.559 "write": true, 00:16:19.559 "unmap": true, 00:16:19.559 "flush": true, 00:16:19.559 "reset": true, 00:16:19.559 "nvme_admin": false, 00:16:19.559 "nvme_io": false, 00:16:19.559 "nvme_io_md": false, 00:16:19.559 "write_zeroes": true, 00:16:19.559 "zcopy": true, 00:16:19.559 "get_zone_info": false, 00:16:19.559 "zone_management": false, 00:16:19.559 "zone_append": false, 00:16:19.559 "compare": false, 00:16:19.559 "compare_and_write": false, 00:16:19.559 "abort": true, 00:16:19.559 "seek_hole": false, 00:16:19.559 "seek_data": false, 00:16:19.559 "copy": true, 00:16:19.559 "nvme_iov_md": false 00:16:19.559 }, 00:16:19.559 "memory_domains": [ 00:16:19.559 { 00:16:19.559 "dma_device_id": "system", 00:16:19.559 "dma_device_type": 1 00:16:19.559 }, 00:16:19.559 { 00:16:19.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.559 "dma_device_type": 2 00:16:19.559 } 00:16:19.559 ], 00:16:19.559 "driver_specific": {} 00:16:19.559 } 00:16:19.559 ] 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:19.559 
11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.559 [2024-11-15 11:01:26.445791] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:19.559 [2024-11-15 11:01:26.445892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:19.559 [2024-11-15 11:01:26.445939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.559 [2024-11-15 11:01:26.447987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.559 [2024-11-15 11:01:26.448079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:19.559 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.560 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:19.560 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.560 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.560 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.560 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.560 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.560 11:01:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.560 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.560 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.560 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.560 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.560 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.560 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.560 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.560 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.822 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.822 "name": "Existed_Raid", 00:16:19.822 "uuid": "38783967-9a5d-4f79-929b-005d51a93e3e", 00:16:19.822 "strip_size_kb": 64, 00:16:19.822 "state": "configuring", 00:16:19.822 "raid_level": "raid5f", 00:16:19.822 "superblock": true, 00:16:19.822 "num_base_bdevs": 4, 00:16:19.822 "num_base_bdevs_discovered": 3, 00:16:19.822 "num_base_bdevs_operational": 4, 00:16:19.822 "base_bdevs_list": [ 00:16:19.822 { 00:16:19.823 "name": "BaseBdev1", 00:16:19.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.823 "is_configured": false, 00:16:19.823 "data_offset": 0, 00:16:19.823 "data_size": 0 00:16:19.823 }, 00:16:19.823 { 00:16:19.823 "name": "BaseBdev2", 00:16:19.823 "uuid": "64a9f031-8ee7-4174-80ef-c414c431730f", 00:16:19.823 "is_configured": true, 00:16:19.823 "data_offset": 2048, 00:16:19.823 "data_size": 63488 00:16:19.823 }, 00:16:19.823 { 00:16:19.823 "name": "BaseBdev3", 
00:16:19.823 "uuid": "82ce1fbb-1d04-4146-aba9-cd6c1d8b1533", 00:16:19.823 "is_configured": true, 00:16:19.823 "data_offset": 2048, 00:16:19.823 "data_size": 63488 00:16:19.823 }, 00:16:19.823 { 00:16:19.823 "name": "BaseBdev4", 00:16:19.823 "uuid": "e728fff6-9b26-4b20-8189-fa2cfc038c2f", 00:16:19.823 "is_configured": true, 00:16:19.823 "data_offset": 2048, 00:16:19.823 "data_size": 63488 00:16:19.823 } 00:16:19.823 ] 00:16:19.823 }' 00:16:19.823 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.823 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.083 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:20.083 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.083 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.083 [2024-11-15 11:01:26.817181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:20.083 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.083 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:20.083 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.083 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.083 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.083 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.083 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.083 
11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.083 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.083 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.083 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.083 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.083 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.084 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.084 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.084 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.084 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.084 "name": "Existed_Raid", 00:16:20.084 "uuid": "38783967-9a5d-4f79-929b-005d51a93e3e", 00:16:20.084 "strip_size_kb": 64, 00:16:20.084 "state": "configuring", 00:16:20.084 "raid_level": "raid5f", 00:16:20.084 "superblock": true, 00:16:20.084 "num_base_bdevs": 4, 00:16:20.084 "num_base_bdevs_discovered": 2, 00:16:20.084 "num_base_bdevs_operational": 4, 00:16:20.084 "base_bdevs_list": [ 00:16:20.084 { 00:16:20.084 "name": "BaseBdev1", 00:16:20.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.084 "is_configured": false, 00:16:20.084 "data_offset": 0, 00:16:20.084 "data_size": 0 00:16:20.084 }, 00:16:20.084 { 00:16:20.084 "name": null, 00:16:20.084 "uuid": "64a9f031-8ee7-4174-80ef-c414c431730f", 00:16:20.084 "is_configured": false, 00:16:20.084 "data_offset": 0, 00:16:20.084 "data_size": 63488 00:16:20.084 }, 00:16:20.084 { 
00:16:20.084 "name": "BaseBdev3", 00:16:20.084 "uuid": "82ce1fbb-1d04-4146-aba9-cd6c1d8b1533", 00:16:20.084 "is_configured": true, 00:16:20.084 "data_offset": 2048, 00:16:20.084 "data_size": 63488 00:16:20.084 }, 00:16:20.084 { 00:16:20.084 "name": "BaseBdev4", 00:16:20.084 "uuid": "e728fff6-9b26-4b20-8189-fa2cfc038c2f", 00:16:20.084 "is_configured": true, 00:16:20.084 "data_offset": 2048, 00:16:20.084 "data_size": 63488 00:16:20.084 } 00:16:20.084 ] 00:16:20.084 }' 00:16:20.084 11:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.084 11:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.669 [2024-11-15 11:01:27.364516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.669 BaseBdev1 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.669 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.669 [ 00:16:20.669 { 00:16:20.669 "name": "BaseBdev1", 00:16:20.669 "aliases": [ 00:16:20.669 "9aa82b8b-617e-4760-a21d-02a22b517463" 00:16:20.669 ], 00:16:20.669 "product_name": "Malloc disk", 00:16:20.669 "block_size": 512, 00:16:20.669 "num_blocks": 65536, 00:16:20.669 "uuid": "9aa82b8b-617e-4760-a21d-02a22b517463", 00:16:20.669 "assigned_rate_limits": { 00:16:20.669 "rw_ios_per_sec": 0, 00:16:20.669 "rw_mbytes_per_sec": 0, 00:16:20.669 
"r_mbytes_per_sec": 0, 00:16:20.669 "w_mbytes_per_sec": 0 00:16:20.669 }, 00:16:20.669 "claimed": true, 00:16:20.669 "claim_type": "exclusive_write", 00:16:20.669 "zoned": false, 00:16:20.669 "supported_io_types": { 00:16:20.669 "read": true, 00:16:20.669 "write": true, 00:16:20.669 "unmap": true, 00:16:20.669 "flush": true, 00:16:20.669 "reset": true, 00:16:20.669 "nvme_admin": false, 00:16:20.669 "nvme_io": false, 00:16:20.669 "nvme_io_md": false, 00:16:20.669 "write_zeroes": true, 00:16:20.669 "zcopy": true, 00:16:20.669 "get_zone_info": false, 00:16:20.669 "zone_management": false, 00:16:20.669 "zone_append": false, 00:16:20.669 "compare": false, 00:16:20.669 "compare_and_write": false, 00:16:20.669 "abort": true, 00:16:20.669 "seek_hole": false, 00:16:20.669 "seek_data": false, 00:16:20.669 "copy": true, 00:16:20.669 "nvme_iov_md": false 00:16:20.669 }, 00:16:20.669 "memory_domains": [ 00:16:20.669 { 00:16:20.669 "dma_device_id": "system", 00:16:20.669 "dma_device_type": 1 00:16:20.669 }, 00:16:20.669 { 00:16:20.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.669 "dma_device_type": 2 00:16:20.669 } 00:16:20.669 ], 00:16:20.669 "driver_specific": {} 00:16:20.669 } 00:16:20.670 ] 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.670 11:01:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.670 "name": "Existed_Raid", 00:16:20.670 "uuid": "38783967-9a5d-4f79-929b-005d51a93e3e", 00:16:20.670 "strip_size_kb": 64, 00:16:20.670 "state": "configuring", 00:16:20.670 "raid_level": "raid5f", 00:16:20.670 "superblock": true, 00:16:20.670 "num_base_bdevs": 4, 00:16:20.670 "num_base_bdevs_discovered": 3, 00:16:20.670 "num_base_bdevs_operational": 4, 00:16:20.670 "base_bdevs_list": [ 00:16:20.670 { 00:16:20.670 "name": "BaseBdev1", 00:16:20.670 "uuid": "9aa82b8b-617e-4760-a21d-02a22b517463", 00:16:20.670 "is_configured": true, 00:16:20.670 "data_offset": 2048, 00:16:20.670 "data_size": 63488 00:16:20.670 
}, 00:16:20.670 { 00:16:20.670 "name": null, 00:16:20.670 "uuid": "64a9f031-8ee7-4174-80ef-c414c431730f", 00:16:20.670 "is_configured": false, 00:16:20.670 "data_offset": 0, 00:16:20.670 "data_size": 63488 00:16:20.670 }, 00:16:20.670 { 00:16:20.670 "name": "BaseBdev3", 00:16:20.670 "uuid": "82ce1fbb-1d04-4146-aba9-cd6c1d8b1533", 00:16:20.670 "is_configured": true, 00:16:20.670 "data_offset": 2048, 00:16:20.670 "data_size": 63488 00:16:20.670 }, 00:16:20.670 { 00:16:20.670 "name": "BaseBdev4", 00:16:20.670 "uuid": "e728fff6-9b26-4b20-8189-fa2cfc038c2f", 00:16:20.670 "is_configured": true, 00:16:20.670 "data_offset": 2048, 00:16:20.670 "data_size": 63488 00:16:20.670 } 00:16:20.670 ] 00:16:20.670 }' 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.670 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.240 
[2024-11-15 11:01:27.947616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.240 11:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:21.240 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.240 "name": "Existed_Raid", 00:16:21.240 "uuid": "38783967-9a5d-4f79-929b-005d51a93e3e", 00:16:21.240 "strip_size_kb": 64, 00:16:21.240 "state": "configuring", 00:16:21.240 "raid_level": "raid5f", 00:16:21.240 "superblock": true, 00:16:21.240 "num_base_bdevs": 4, 00:16:21.240 "num_base_bdevs_discovered": 2, 00:16:21.240 "num_base_bdevs_operational": 4, 00:16:21.240 "base_bdevs_list": [ 00:16:21.240 { 00:16:21.240 "name": "BaseBdev1", 00:16:21.240 "uuid": "9aa82b8b-617e-4760-a21d-02a22b517463", 00:16:21.240 "is_configured": true, 00:16:21.240 "data_offset": 2048, 00:16:21.240 "data_size": 63488 00:16:21.240 }, 00:16:21.240 { 00:16:21.240 "name": null, 00:16:21.240 "uuid": "64a9f031-8ee7-4174-80ef-c414c431730f", 00:16:21.240 "is_configured": false, 00:16:21.240 "data_offset": 0, 00:16:21.240 "data_size": 63488 00:16:21.240 }, 00:16:21.240 { 00:16:21.240 "name": null, 00:16:21.240 "uuid": "82ce1fbb-1d04-4146-aba9-cd6c1d8b1533", 00:16:21.240 "is_configured": false, 00:16:21.240 "data_offset": 0, 00:16:21.240 "data_size": 63488 00:16:21.240 }, 00:16:21.240 { 00:16:21.240 "name": "BaseBdev4", 00:16:21.240 "uuid": "e728fff6-9b26-4b20-8189-fa2cfc038c2f", 00:16:21.240 "is_configured": true, 00:16:21.240 "data_offset": 2048, 00:16:21.240 "data_size": 63488 00:16:21.240 } 00:16:21.240 ] 00:16:21.240 }' 00:16:21.240 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.240 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.500 [2024-11-15 11:01:28.378860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.500 11:01:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.500 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.760 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.760 "name": "Existed_Raid", 00:16:21.760 "uuid": "38783967-9a5d-4f79-929b-005d51a93e3e", 00:16:21.760 "strip_size_kb": 64, 00:16:21.760 "state": "configuring", 00:16:21.760 "raid_level": "raid5f", 00:16:21.760 "superblock": true, 00:16:21.760 "num_base_bdevs": 4, 00:16:21.760 "num_base_bdevs_discovered": 3, 00:16:21.760 "num_base_bdevs_operational": 4, 00:16:21.760 "base_bdevs_list": [ 00:16:21.760 { 00:16:21.760 "name": "BaseBdev1", 00:16:21.760 "uuid": "9aa82b8b-617e-4760-a21d-02a22b517463", 00:16:21.760 "is_configured": true, 00:16:21.760 "data_offset": 2048, 00:16:21.760 "data_size": 63488 00:16:21.760 }, 00:16:21.760 { 00:16:21.760 "name": null, 00:16:21.760 "uuid": "64a9f031-8ee7-4174-80ef-c414c431730f", 00:16:21.760 "is_configured": false, 00:16:21.760 "data_offset": 0, 00:16:21.760 "data_size": 63488 00:16:21.760 }, 00:16:21.760 { 00:16:21.760 "name": "BaseBdev3", 00:16:21.760 "uuid": "82ce1fbb-1d04-4146-aba9-cd6c1d8b1533", 00:16:21.760 "is_configured": true, 00:16:21.760 "data_offset": 2048, 00:16:21.760 "data_size": 63488 00:16:21.760 }, 00:16:21.760 { 
00:16:21.760 "name": "BaseBdev4", 00:16:21.760 "uuid": "e728fff6-9b26-4b20-8189-fa2cfc038c2f", 00:16:21.760 "is_configured": true, 00:16:21.760 "data_offset": 2048, 00:16:21.760 "data_size": 63488 00:16:21.760 } 00:16:21.760 ] 00:16:21.760 }' 00:16:21.760 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.760 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.021 [2024-11-15 11:01:28.818139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.021 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.281 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.281 "name": "Existed_Raid", 00:16:22.281 "uuid": "38783967-9a5d-4f79-929b-005d51a93e3e", 00:16:22.281 "strip_size_kb": 64, 00:16:22.281 "state": "configuring", 00:16:22.281 "raid_level": "raid5f", 00:16:22.281 "superblock": true, 00:16:22.281 "num_base_bdevs": 4, 00:16:22.281 "num_base_bdevs_discovered": 2, 00:16:22.281 
"num_base_bdevs_operational": 4, 00:16:22.281 "base_bdevs_list": [ 00:16:22.281 { 00:16:22.281 "name": null, 00:16:22.281 "uuid": "9aa82b8b-617e-4760-a21d-02a22b517463", 00:16:22.281 "is_configured": false, 00:16:22.281 "data_offset": 0, 00:16:22.281 "data_size": 63488 00:16:22.281 }, 00:16:22.281 { 00:16:22.281 "name": null, 00:16:22.281 "uuid": "64a9f031-8ee7-4174-80ef-c414c431730f", 00:16:22.281 "is_configured": false, 00:16:22.281 "data_offset": 0, 00:16:22.281 "data_size": 63488 00:16:22.281 }, 00:16:22.281 { 00:16:22.281 "name": "BaseBdev3", 00:16:22.281 "uuid": "82ce1fbb-1d04-4146-aba9-cd6c1d8b1533", 00:16:22.281 "is_configured": true, 00:16:22.281 "data_offset": 2048, 00:16:22.281 "data_size": 63488 00:16:22.281 }, 00:16:22.281 { 00:16:22.281 "name": "BaseBdev4", 00:16:22.281 "uuid": "e728fff6-9b26-4b20-8189-fa2cfc038c2f", 00:16:22.281 "is_configured": true, 00:16:22.281 "data_offset": 2048, 00:16:22.281 "data_size": 63488 00:16:22.281 } 00:16:22.281 ] 00:16:22.281 }' 00:16:22.281 11:01:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.281 11:01:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.541 [2024-11-15 11:01:29.388034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.541 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.541 "name": "Existed_Raid", 00:16:22.541 "uuid": "38783967-9a5d-4f79-929b-005d51a93e3e", 00:16:22.541 "strip_size_kb": 64, 00:16:22.541 "state": "configuring", 00:16:22.541 "raid_level": "raid5f", 00:16:22.541 "superblock": true, 00:16:22.541 "num_base_bdevs": 4, 00:16:22.541 "num_base_bdevs_discovered": 3, 00:16:22.541 "num_base_bdevs_operational": 4, 00:16:22.542 "base_bdevs_list": [ 00:16:22.542 { 00:16:22.542 "name": null, 00:16:22.542 "uuid": "9aa82b8b-617e-4760-a21d-02a22b517463", 00:16:22.542 "is_configured": false, 00:16:22.542 "data_offset": 0, 00:16:22.542 "data_size": 63488 00:16:22.542 }, 00:16:22.542 { 00:16:22.542 "name": "BaseBdev2", 00:16:22.542 "uuid": "64a9f031-8ee7-4174-80ef-c414c431730f", 00:16:22.542 "is_configured": true, 00:16:22.542 "data_offset": 2048, 00:16:22.542 "data_size": 63488 00:16:22.542 }, 00:16:22.542 { 00:16:22.542 "name": "BaseBdev3", 00:16:22.542 "uuid": "82ce1fbb-1d04-4146-aba9-cd6c1d8b1533", 00:16:22.542 "is_configured": true, 00:16:22.542 "data_offset": 2048, 00:16:22.542 "data_size": 63488 00:16:22.542 }, 00:16:22.542 { 00:16:22.542 "name": "BaseBdev4", 00:16:22.542 "uuid": "e728fff6-9b26-4b20-8189-fa2cfc038c2f", 00:16:22.542 "is_configured": true, 00:16:22.542 "data_offset": 2048, 00:16:22.542 "data_size": 63488 00:16:22.542 } 00:16:22.542 ] 00:16:22.542 }' 00:16:22.542 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.542 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9aa82b8b-617e-4760-a21d-02a22b517463 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.112 [2024-11-15 11:01:29.962901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:23.112 [2024-11-15 11:01:29.963154] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:23.112 [2024-11-15 
11:01:29.963167] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:23.112 [2024-11-15 11:01:29.963457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:23.112 NewBaseBdev 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.112 [2024-11-15 11:01:29.971121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:23.112 [2024-11-15 11:01:29.971190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:23.112 [2024-11-15 11:01:29.971536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.112 11:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.112 [ 00:16:23.112 { 00:16:23.112 "name": "NewBaseBdev", 00:16:23.112 "aliases": [ 00:16:23.112 "9aa82b8b-617e-4760-a21d-02a22b517463" 00:16:23.112 ], 00:16:23.112 "product_name": "Malloc disk", 00:16:23.112 "block_size": 512, 00:16:23.112 "num_blocks": 65536, 00:16:23.112 "uuid": "9aa82b8b-617e-4760-a21d-02a22b517463", 00:16:23.112 "assigned_rate_limits": { 00:16:23.112 "rw_ios_per_sec": 0, 00:16:23.112 "rw_mbytes_per_sec": 0, 00:16:23.112 "r_mbytes_per_sec": 0, 00:16:23.112 "w_mbytes_per_sec": 0 00:16:23.112 }, 00:16:23.112 "claimed": true, 00:16:23.112 "claim_type": "exclusive_write", 00:16:23.112 "zoned": false, 00:16:23.112 "supported_io_types": { 00:16:23.112 "read": true, 00:16:23.112 "write": true, 00:16:23.112 "unmap": true, 00:16:23.112 "flush": true, 00:16:23.112 "reset": true, 00:16:23.112 "nvme_admin": false, 00:16:23.112 "nvme_io": false, 00:16:23.112 "nvme_io_md": false, 00:16:23.112 "write_zeroes": true, 00:16:23.112 "zcopy": true, 00:16:23.112 "get_zone_info": false, 00:16:23.112 "zone_management": false, 00:16:23.112 "zone_append": false, 00:16:23.113 "compare": false, 00:16:23.113 "compare_and_write": false, 00:16:23.113 "abort": true, 00:16:23.113 "seek_hole": false, 00:16:23.113 "seek_data": false, 00:16:23.113 "copy": true, 00:16:23.113 "nvme_iov_md": false 00:16:23.113 }, 00:16:23.113 "memory_domains": [ 00:16:23.113 { 00:16:23.113 "dma_device_id": "system", 00:16:23.113 "dma_device_type": 1 00:16:23.113 }, 00:16:23.113 { 00:16:23.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.113 "dma_device_type": 2 00:16:23.113 } 00:16:23.113 ], 00:16:23.113 "driver_specific": {} 00:16:23.113 } 00:16:23.113 ] 00:16:23.113 11:01:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.113 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:23.113 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:23.113 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.113 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.113 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.113 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.113 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.113 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.113 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.113 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.113 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.113 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.113 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.113 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.113 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.113 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:23.373 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.373 "name": "Existed_Raid", 00:16:23.373 "uuid": "38783967-9a5d-4f79-929b-005d51a93e3e", 00:16:23.373 "strip_size_kb": 64, 00:16:23.373 "state": "online", 00:16:23.373 "raid_level": "raid5f", 00:16:23.373 "superblock": true, 00:16:23.373 "num_base_bdevs": 4, 00:16:23.373 "num_base_bdevs_discovered": 4, 00:16:23.373 "num_base_bdevs_operational": 4, 00:16:23.373 "base_bdevs_list": [ 00:16:23.373 { 00:16:23.373 "name": "NewBaseBdev", 00:16:23.373 "uuid": "9aa82b8b-617e-4760-a21d-02a22b517463", 00:16:23.373 "is_configured": true, 00:16:23.373 "data_offset": 2048, 00:16:23.373 "data_size": 63488 00:16:23.373 }, 00:16:23.373 { 00:16:23.373 "name": "BaseBdev2", 00:16:23.373 "uuid": "64a9f031-8ee7-4174-80ef-c414c431730f", 00:16:23.373 "is_configured": true, 00:16:23.373 "data_offset": 2048, 00:16:23.373 "data_size": 63488 00:16:23.373 }, 00:16:23.373 { 00:16:23.373 "name": "BaseBdev3", 00:16:23.373 "uuid": "82ce1fbb-1d04-4146-aba9-cd6c1d8b1533", 00:16:23.373 "is_configured": true, 00:16:23.373 "data_offset": 2048, 00:16:23.373 "data_size": 63488 00:16:23.373 }, 00:16:23.373 { 00:16:23.373 "name": "BaseBdev4", 00:16:23.373 "uuid": "e728fff6-9b26-4b20-8189-fa2cfc038c2f", 00:16:23.373 "is_configured": true, 00:16:23.373 "data_offset": 2048, 00:16:23.373 "data_size": 63488 00:16:23.373 } 00:16:23.373 ] 00:16:23.373 }' 00:16:23.373 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.373 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.633 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:23.633 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:23.633 11:01:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:23.633 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:23.633 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:23.633 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:23.633 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:23.633 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.633 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.633 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:23.633 [2024-11-15 11:01:30.471467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.633 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.633 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:23.633 "name": "Existed_Raid", 00:16:23.633 "aliases": [ 00:16:23.633 "38783967-9a5d-4f79-929b-005d51a93e3e" 00:16:23.633 ], 00:16:23.633 "product_name": "Raid Volume", 00:16:23.633 "block_size": 512, 00:16:23.633 "num_blocks": 190464, 00:16:23.633 "uuid": "38783967-9a5d-4f79-929b-005d51a93e3e", 00:16:23.633 "assigned_rate_limits": { 00:16:23.633 "rw_ios_per_sec": 0, 00:16:23.633 "rw_mbytes_per_sec": 0, 00:16:23.633 "r_mbytes_per_sec": 0, 00:16:23.633 "w_mbytes_per_sec": 0 00:16:23.633 }, 00:16:23.633 "claimed": false, 00:16:23.633 "zoned": false, 00:16:23.633 "supported_io_types": { 00:16:23.633 "read": true, 00:16:23.633 "write": true, 00:16:23.633 "unmap": false, 00:16:23.633 "flush": false, 00:16:23.633 "reset": true, 00:16:23.633 "nvme_admin": false, 00:16:23.633 "nvme_io": false, 
00:16:23.633 "nvme_io_md": false, 00:16:23.633 "write_zeroes": true, 00:16:23.633 "zcopy": false, 00:16:23.633 "get_zone_info": false, 00:16:23.633 "zone_management": false, 00:16:23.633 "zone_append": false, 00:16:23.633 "compare": false, 00:16:23.633 "compare_and_write": false, 00:16:23.633 "abort": false, 00:16:23.633 "seek_hole": false, 00:16:23.633 "seek_data": false, 00:16:23.633 "copy": false, 00:16:23.633 "nvme_iov_md": false 00:16:23.633 }, 00:16:23.633 "driver_specific": { 00:16:23.633 "raid": { 00:16:23.633 "uuid": "38783967-9a5d-4f79-929b-005d51a93e3e", 00:16:23.633 "strip_size_kb": 64, 00:16:23.633 "state": "online", 00:16:23.633 "raid_level": "raid5f", 00:16:23.633 "superblock": true, 00:16:23.633 "num_base_bdevs": 4, 00:16:23.633 "num_base_bdevs_discovered": 4, 00:16:23.633 "num_base_bdevs_operational": 4, 00:16:23.633 "base_bdevs_list": [ 00:16:23.633 { 00:16:23.633 "name": "NewBaseBdev", 00:16:23.633 "uuid": "9aa82b8b-617e-4760-a21d-02a22b517463", 00:16:23.633 "is_configured": true, 00:16:23.633 "data_offset": 2048, 00:16:23.633 "data_size": 63488 00:16:23.633 }, 00:16:23.633 { 00:16:23.633 "name": "BaseBdev2", 00:16:23.633 "uuid": "64a9f031-8ee7-4174-80ef-c414c431730f", 00:16:23.633 "is_configured": true, 00:16:23.633 "data_offset": 2048, 00:16:23.633 "data_size": 63488 00:16:23.633 }, 00:16:23.633 { 00:16:23.633 "name": "BaseBdev3", 00:16:23.633 "uuid": "82ce1fbb-1d04-4146-aba9-cd6c1d8b1533", 00:16:23.633 "is_configured": true, 00:16:23.633 "data_offset": 2048, 00:16:23.633 "data_size": 63488 00:16:23.633 }, 00:16:23.633 { 00:16:23.633 "name": "BaseBdev4", 00:16:23.633 "uuid": "e728fff6-9b26-4b20-8189-fa2cfc038c2f", 00:16:23.633 "is_configured": true, 00:16:23.633 "data_offset": 2048, 00:16:23.633 "data_size": 63488 00:16:23.633 } 00:16:23.633 ] 00:16:23.633 } 00:16:23.633 } 00:16:23.633 }' 00:16:23.633 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:23.894 BaseBdev2 00:16:23.894 BaseBdev3 00:16:23.894 BaseBdev4' 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:23.894 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.895 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.895 [2024-11-15 11:01:30.814607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:23.895 [2024-11-15 11:01:30.814680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.895 [2024-11-15 11:01:30.814768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.895 [2024-11-15 11:01:30.815091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.895 [2024-11-15 11:01:30.815145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:23.895 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.895 11:01:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83619 00:16:23.895 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 83619 ']' 00:16:24.154 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 83619 00:16:24.154 11:01:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:24.154 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:24.154 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83619 00:16:24.154 killing process with pid 83619 00:16:24.154 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:24.154 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:24.154 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83619' 00:16:24.154 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 83619 00:16:24.154 [2024-11-15 11:01:30.861857] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.154 11:01:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 83619 00:16:24.413 [2024-11-15 11:01:31.272013] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.793 11:01:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:25.793 00:16:25.793 real 0m11.292s 00:16:25.793 user 0m17.865s 00:16:25.793 sys 0m2.029s 00:16:25.793 11:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:25.793 ************************************ 00:16:25.793 END TEST raid5f_state_function_test_sb 00:16:25.793 ************************************ 00:16:25.793 11:01:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.793 11:01:32 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:25.793 11:01:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:25.793 
11:01:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:25.793 11:01:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.793 ************************************ 00:16:25.793 START TEST raid5f_superblock_test 00:16:25.793 ************************************ 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:25.793 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84284 00:16:25.794 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:25.794 11:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84284 00:16:25.794 11:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 84284 ']' 00:16:25.794 11:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.794 11:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:25.794 11:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.794 11:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:25.794 11:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.794 [2024-11-15 11:01:32.526231] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:16:25.794 [2024-11-15 11:01:32.526459] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84284 ] 00:16:25.794 [2024-11-15 11:01:32.696848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.053 [2024-11-15 11:01:32.813349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.313 [2024-11-15 11:01:33.009021] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.313 [2024-11-15 11:01:33.009082] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.573 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:26.573 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:16:26.573 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.574 malloc1 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.574 [2024-11-15 11:01:33.401382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:26.574 [2024-11-15 11:01:33.401492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.574 [2024-11-15 11:01:33.401557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:26.574 [2024-11-15 11:01:33.401601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.574 [2024-11-15 11:01:33.403783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.574 [2024-11-15 11:01:33.403855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:26.574 pt1 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.574 malloc2 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.574 [2024-11-15 11:01:33.462531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:26.574 [2024-11-15 11:01:33.462632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.574 [2024-11-15 11:01:33.462658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:26.574 [2024-11-15 11:01:33.462667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.574 [2024-11-15 11:01:33.465011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.574 [2024-11-15 11:01:33.465051] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:26.574 pt2 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.574 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.834 malloc3 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.835 [2024-11-15 11:01:33.531662] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:26.835 [2024-11-15 11:01:33.531759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.835 [2024-11-15 11:01:33.531800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:26.835 [2024-11-15 11:01:33.531829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.835 [2024-11-15 11:01:33.534053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.835 [2024-11-15 11:01:33.534127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:26.835 pt3 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.835 11:01:33 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.835 malloc4 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.835 [2024-11-15 11:01:33.590612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:26.835 [2024-11-15 11:01:33.590661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.835 [2024-11-15 11:01:33.590679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:26.835 [2024-11-15 11:01:33.590688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.835 [2024-11-15 11:01:33.592732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.835 [2024-11-15 11:01:33.592767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:26.835 pt4 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:26.835 [2024-11-15 11:01:33.602621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:26.835 [2024-11-15 11:01:33.604419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:26.835 [2024-11-15 11:01:33.604482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:26.835 [2024-11-15 11:01:33.604542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:26.835 [2024-11-15 11:01:33.604756] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:26.835 [2024-11-15 11:01:33.604779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:26.835 [2024-11-15 11:01:33.605023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:26.835 [2024-11-15 11:01:33.612814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:26.835 [2024-11-15 11:01:33.612841] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:26.835 [2024-11-15 11:01:33.613014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.835 
11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.835 "name": "raid_bdev1", 00:16:26.835 "uuid": "5be4f8f3-fc08-40a6-9e0e-107e4769ab6e", 00:16:26.835 "strip_size_kb": 64, 00:16:26.835 "state": "online", 00:16:26.835 "raid_level": "raid5f", 00:16:26.835 "superblock": true, 00:16:26.835 "num_base_bdevs": 4, 00:16:26.835 "num_base_bdevs_discovered": 4, 00:16:26.835 "num_base_bdevs_operational": 4, 00:16:26.835 "base_bdevs_list": [ 00:16:26.835 { 00:16:26.835 "name": "pt1", 00:16:26.835 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:26.835 "is_configured": true, 00:16:26.835 "data_offset": 2048, 00:16:26.835 "data_size": 63488 00:16:26.835 }, 00:16:26.835 { 00:16:26.835 "name": "pt2", 00:16:26.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.835 "is_configured": true, 00:16:26.835 "data_offset": 2048, 00:16:26.835 
"data_size": 63488 00:16:26.835 }, 00:16:26.835 { 00:16:26.835 "name": "pt3", 00:16:26.835 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:26.835 "is_configured": true, 00:16:26.835 "data_offset": 2048, 00:16:26.835 "data_size": 63488 00:16:26.835 }, 00:16:26.835 { 00:16:26.835 "name": "pt4", 00:16:26.835 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:26.835 "is_configured": true, 00:16:26.835 "data_offset": 2048, 00:16:26.835 "data_size": 63488 00:16:26.835 } 00:16:26.835 ] 00:16:26.835 }' 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.835 11:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.405 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:27.405 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:27.405 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:27.405 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:27.405 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:27.405 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:27.405 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:27.405 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:27.405 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.405 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.405 [2024-11-15 11:01:34.069136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.405 11:01:34 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.405 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:27.405 "name": "raid_bdev1", 00:16:27.405 "aliases": [ 00:16:27.405 "5be4f8f3-fc08-40a6-9e0e-107e4769ab6e" 00:16:27.405 ], 00:16:27.405 "product_name": "Raid Volume", 00:16:27.405 "block_size": 512, 00:16:27.405 "num_blocks": 190464, 00:16:27.405 "uuid": "5be4f8f3-fc08-40a6-9e0e-107e4769ab6e", 00:16:27.405 "assigned_rate_limits": { 00:16:27.405 "rw_ios_per_sec": 0, 00:16:27.405 "rw_mbytes_per_sec": 0, 00:16:27.405 "r_mbytes_per_sec": 0, 00:16:27.405 "w_mbytes_per_sec": 0 00:16:27.405 }, 00:16:27.405 "claimed": false, 00:16:27.405 "zoned": false, 00:16:27.405 "supported_io_types": { 00:16:27.405 "read": true, 00:16:27.405 "write": true, 00:16:27.405 "unmap": false, 00:16:27.405 "flush": false, 00:16:27.405 "reset": true, 00:16:27.405 "nvme_admin": false, 00:16:27.405 "nvme_io": false, 00:16:27.405 "nvme_io_md": false, 00:16:27.405 "write_zeroes": true, 00:16:27.405 "zcopy": false, 00:16:27.405 "get_zone_info": false, 00:16:27.405 "zone_management": false, 00:16:27.405 "zone_append": false, 00:16:27.405 "compare": false, 00:16:27.405 "compare_and_write": false, 00:16:27.405 "abort": false, 00:16:27.405 "seek_hole": false, 00:16:27.405 "seek_data": false, 00:16:27.405 "copy": false, 00:16:27.405 "nvme_iov_md": false 00:16:27.406 }, 00:16:27.406 "driver_specific": { 00:16:27.406 "raid": { 00:16:27.406 "uuid": "5be4f8f3-fc08-40a6-9e0e-107e4769ab6e", 00:16:27.406 "strip_size_kb": 64, 00:16:27.406 "state": "online", 00:16:27.406 "raid_level": "raid5f", 00:16:27.406 "superblock": true, 00:16:27.406 "num_base_bdevs": 4, 00:16:27.406 "num_base_bdevs_discovered": 4, 00:16:27.406 "num_base_bdevs_operational": 4, 00:16:27.406 "base_bdevs_list": [ 00:16:27.406 { 00:16:27.406 "name": "pt1", 00:16:27.406 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:27.406 "is_configured": true, 00:16:27.406 "data_offset": 2048, 
00:16:27.406 "data_size": 63488 00:16:27.406 }, 00:16:27.406 { 00:16:27.406 "name": "pt2", 00:16:27.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.406 "is_configured": true, 00:16:27.406 "data_offset": 2048, 00:16:27.406 "data_size": 63488 00:16:27.406 }, 00:16:27.406 { 00:16:27.406 "name": "pt3", 00:16:27.406 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:27.406 "is_configured": true, 00:16:27.406 "data_offset": 2048, 00:16:27.406 "data_size": 63488 00:16:27.406 }, 00:16:27.406 { 00:16:27.406 "name": "pt4", 00:16:27.406 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:27.406 "is_configured": true, 00:16:27.406 "data_offset": 2048, 00:16:27.406 "data_size": 63488 00:16:27.406 } 00:16:27.406 ] 00:16:27.406 } 00:16:27.406 } 00:16:27.406 }' 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:27.406 pt2 00:16:27.406 pt3 00:16:27.406 pt4' 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.406 11:01:34 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.406 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.666 [2024-11-15 11:01:34.376585] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5be4f8f3-fc08-40a6-9e0e-107e4769ab6e 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
5be4f8f3-fc08-40a6-9e0e-107e4769ab6e ']' 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.666 [2024-11-15 11:01:34.408368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.666 [2024-11-15 11:01:34.408393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.666 [2024-11-15 11:01:34.408488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.666 [2024-11-15 11:01:34.408571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.666 [2024-11-15 11:01:34.408594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:27.666 
11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.666 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.667 11:01:34 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.667 [2024-11-15 11:01:34.552139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:27.667 [2024-11-15 11:01:34.554009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:27.667 [2024-11-15 11:01:34.554066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:27.667 [2024-11-15 11:01:34.554100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:27.667 [2024-11-15 11:01:34.554148] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:27.667 [2024-11-15 11:01:34.554190] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:27.667 [2024-11-15 11:01:34.554209] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:27.667 [2024-11-15 11:01:34.554227] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:27.667 [2024-11-15 11:01:34.554241] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.667 [2024-11-15 11:01:34.554252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:27.667 request: 00:16:27.667 { 00:16:27.667 "name": "raid_bdev1", 00:16:27.667 "raid_level": "raid5f", 00:16:27.667 "base_bdevs": [ 00:16:27.667 "malloc1", 00:16:27.667 "malloc2", 00:16:27.667 "malloc3", 00:16:27.667 "malloc4" 00:16:27.667 ], 00:16:27.667 "strip_size_kb": 64, 00:16:27.667 "superblock": false, 00:16:27.667 "method": "bdev_raid_create", 00:16:27.667 "req_id": 1 00:16:27.667 } 00:16:27.667 Got JSON-RPC error response 
00:16:27.667 response: 00:16:27.667 { 00:16:27.667 "code": -17, 00:16:27.667 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:27.667 } 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.667 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.930 [2024-11-15 11:01:34.620026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:27.930 [2024-11-15 11:01:34.620088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:27.930 [2024-11-15 11:01:34.620107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:27.930 [2024-11-15 11:01:34.620117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.930 [2024-11-15 11:01:34.622604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.930 [2024-11-15 11:01:34.622642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:27.930 [2024-11-15 11:01:34.622725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:27.930 [2024-11-15 11:01:34.622812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:27.930 pt1 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.930 "name": "raid_bdev1", 00:16:27.930 "uuid": "5be4f8f3-fc08-40a6-9e0e-107e4769ab6e", 00:16:27.930 "strip_size_kb": 64, 00:16:27.930 "state": "configuring", 00:16:27.930 "raid_level": "raid5f", 00:16:27.930 "superblock": true, 00:16:27.930 "num_base_bdevs": 4, 00:16:27.930 "num_base_bdevs_discovered": 1, 00:16:27.930 "num_base_bdevs_operational": 4, 00:16:27.930 "base_bdevs_list": [ 00:16:27.930 { 00:16:27.930 "name": "pt1", 00:16:27.930 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:27.930 "is_configured": true, 00:16:27.930 "data_offset": 2048, 00:16:27.930 "data_size": 63488 00:16:27.930 }, 00:16:27.930 { 00:16:27.930 "name": null, 00:16:27.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.930 "is_configured": false, 00:16:27.930 "data_offset": 2048, 00:16:27.930 "data_size": 63488 00:16:27.930 }, 00:16:27.930 { 00:16:27.930 "name": null, 00:16:27.930 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:27.930 "is_configured": false, 00:16:27.930 "data_offset": 2048, 00:16:27.930 "data_size": 63488 00:16:27.930 }, 00:16:27.930 { 00:16:27.930 "name": null, 00:16:27.930 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:27.930 "is_configured": false, 00:16:27.930 "data_offset": 2048, 00:16:27.930 "data_size": 63488 00:16:27.930 } 00:16:27.930 ] 00:16:27.930 }' 
00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.930 11:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.194 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:28.194 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:28.194 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.194 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.194 [2024-11-15 11:01:35.107225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:28.194 [2024-11-15 11:01:35.107312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.194 [2024-11-15 11:01:35.107334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:28.194 [2024-11-15 11:01:35.107345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.194 [2024-11-15 11:01:35.107787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.194 [2024-11-15 11:01:35.107808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:28.194 [2024-11-15 11:01:35.107886] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:28.194 [2024-11-15 11:01:35.107911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.194 pt2 00:16:28.194 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.194 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:28.194 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:28.194 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.194 [2024-11-15 11:01:35.115204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:28.453 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.453 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:28.453 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.453 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.453 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.453 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.453 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.453 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.453 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.454 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.454 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.454 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.454 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.454 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.454 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.454 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:28.454 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.454 "name": "raid_bdev1", 00:16:28.454 "uuid": "5be4f8f3-fc08-40a6-9e0e-107e4769ab6e", 00:16:28.454 "strip_size_kb": 64, 00:16:28.454 "state": "configuring", 00:16:28.454 "raid_level": "raid5f", 00:16:28.454 "superblock": true, 00:16:28.454 "num_base_bdevs": 4, 00:16:28.454 "num_base_bdevs_discovered": 1, 00:16:28.454 "num_base_bdevs_operational": 4, 00:16:28.454 "base_bdevs_list": [ 00:16:28.454 { 00:16:28.454 "name": "pt1", 00:16:28.454 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.454 "is_configured": true, 00:16:28.454 "data_offset": 2048, 00:16:28.454 "data_size": 63488 00:16:28.454 }, 00:16:28.454 { 00:16:28.454 "name": null, 00:16:28.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.454 "is_configured": false, 00:16:28.454 "data_offset": 0, 00:16:28.454 "data_size": 63488 00:16:28.454 }, 00:16:28.454 { 00:16:28.454 "name": null, 00:16:28.454 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.454 "is_configured": false, 00:16:28.454 "data_offset": 2048, 00:16:28.454 "data_size": 63488 00:16:28.454 }, 00:16:28.454 { 00:16:28.454 "name": null, 00:16:28.454 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:28.454 "is_configured": false, 00:16:28.454 "data_offset": 2048, 00:16:28.454 "data_size": 63488 00:16:28.454 } 00:16:28.454 ] 00:16:28.454 }' 00:16:28.454 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.454 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.713 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:28.713 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:28.713 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:28.713 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.713 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.713 [2024-11-15 11:01:35.610395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:28.713 [2024-11-15 11:01:35.610461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.713 [2024-11-15 11:01:35.610495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:28.713 [2024-11-15 11:01:35.610504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.713 [2024-11-15 11:01:35.610959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.713 [2024-11-15 11:01:35.610981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:28.713 [2024-11-15 11:01:35.611067] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:28.713 [2024-11-15 11:01:35.611087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.713 pt2 00:16:28.713 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.713 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:28.713 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:28.713 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:28.713 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.713 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.713 [2024-11-15 11:01:35.622327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:28.713 [2024-11-15 11:01:35.622370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.713 [2024-11-15 11:01:35.622387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:28.713 [2024-11-15 11:01:35.622395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.714 [2024-11-15 11:01:35.622745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.714 [2024-11-15 11:01:35.622765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:28.714 [2024-11-15 11:01:35.622828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:28.714 [2024-11-15 11:01:35.622844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:28.714 pt3 00:16:28.714 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.714 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:28.714 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:28.714 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:28.714 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.714 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.714 [2024-11-15 11:01:35.634269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:28.714 [2024-11-15 11:01:35.634323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.714 [2024-11-15 11:01:35.634340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:28.714 [2024-11-15 11:01:35.634347] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.714 [2024-11-15 11:01:35.634694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.714 [2024-11-15 11:01:35.634709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:28.714 [2024-11-15 11:01:35.634764] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:28.714 [2024-11-15 11:01:35.634780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:28.714 [2024-11-15 11:01:35.634902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:28.714 [2024-11-15 11:01:35.634910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:28.714 [2024-11-15 11:01:35.635120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:28.972 [2024-11-15 11:01:35.642210] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:28.972 [2024-11-15 11:01:35.642237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:28.972 [2024-11-15 11:01:35.642409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.972 pt4 00:16:28.972 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.972 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:28.972 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:28.972 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:28.972 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.972 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:28.972 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.972 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.973 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.973 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.973 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.973 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.973 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.973 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.973 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.973 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.973 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.973 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.973 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.973 "name": "raid_bdev1", 00:16:28.973 "uuid": "5be4f8f3-fc08-40a6-9e0e-107e4769ab6e", 00:16:28.973 "strip_size_kb": 64, 00:16:28.973 "state": "online", 00:16:28.973 "raid_level": "raid5f", 00:16:28.973 "superblock": true, 00:16:28.973 "num_base_bdevs": 4, 00:16:28.973 "num_base_bdevs_discovered": 4, 00:16:28.973 "num_base_bdevs_operational": 4, 00:16:28.973 "base_bdevs_list": [ 00:16:28.973 { 00:16:28.973 "name": "pt1", 00:16:28.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.973 "is_configured": true, 00:16:28.973 
"data_offset": 2048, 00:16:28.973 "data_size": 63488 00:16:28.973 }, 00:16:28.973 { 00:16:28.973 "name": "pt2", 00:16:28.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.973 "is_configured": true, 00:16:28.973 "data_offset": 2048, 00:16:28.973 "data_size": 63488 00:16:28.973 }, 00:16:28.973 { 00:16:28.973 "name": "pt3", 00:16:28.973 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.973 "is_configured": true, 00:16:28.973 "data_offset": 2048, 00:16:28.973 "data_size": 63488 00:16:28.973 }, 00:16:28.973 { 00:16:28.973 "name": "pt4", 00:16:28.973 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:28.973 "is_configured": true, 00:16:28.973 "data_offset": 2048, 00:16:28.973 "data_size": 63488 00:16:28.973 } 00:16:28.973 ] 00:16:28.973 }' 00:16:28.973 11:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.973 11:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.232 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:29.232 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:29.232 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:29.232 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:29.232 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:29.232 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:29.232 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:29.232 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:29.232 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.232 11:01:36 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.232 [2024-11-15 11:01:36.074143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.232 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.232 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:29.232 "name": "raid_bdev1", 00:16:29.232 "aliases": [ 00:16:29.232 "5be4f8f3-fc08-40a6-9e0e-107e4769ab6e" 00:16:29.232 ], 00:16:29.232 "product_name": "Raid Volume", 00:16:29.232 "block_size": 512, 00:16:29.232 "num_blocks": 190464, 00:16:29.232 "uuid": "5be4f8f3-fc08-40a6-9e0e-107e4769ab6e", 00:16:29.232 "assigned_rate_limits": { 00:16:29.232 "rw_ios_per_sec": 0, 00:16:29.232 "rw_mbytes_per_sec": 0, 00:16:29.232 "r_mbytes_per_sec": 0, 00:16:29.232 "w_mbytes_per_sec": 0 00:16:29.232 }, 00:16:29.232 "claimed": false, 00:16:29.232 "zoned": false, 00:16:29.232 "supported_io_types": { 00:16:29.232 "read": true, 00:16:29.232 "write": true, 00:16:29.232 "unmap": false, 00:16:29.232 "flush": false, 00:16:29.232 "reset": true, 00:16:29.232 "nvme_admin": false, 00:16:29.232 "nvme_io": false, 00:16:29.232 "nvme_io_md": false, 00:16:29.232 "write_zeroes": true, 00:16:29.232 "zcopy": false, 00:16:29.232 "get_zone_info": false, 00:16:29.232 "zone_management": false, 00:16:29.232 "zone_append": false, 00:16:29.232 "compare": false, 00:16:29.232 "compare_and_write": false, 00:16:29.232 "abort": false, 00:16:29.232 "seek_hole": false, 00:16:29.232 "seek_data": false, 00:16:29.232 "copy": false, 00:16:29.232 "nvme_iov_md": false 00:16:29.232 }, 00:16:29.232 "driver_specific": { 00:16:29.232 "raid": { 00:16:29.232 "uuid": "5be4f8f3-fc08-40a6-9e0e-107e4769ab6e", 00:16:29.232 "strip_size_kb": 64, 00:16:29.232 "state": "online", 00:16:29.232 "raid_level": "raid5f", 00:16:29.232 "superblock": true, 00:16:29.232 "num_base_bdevs": 4, 00:16:29.232 "num_base_bdevs_discovered": 4, 
00:16:29.232 "num_base_bdevs_operational": 4, 00:16:29.232 "base_bdevs_list": [ 00:16:29.232 { 00:16:29.232 "name": "pt1", 00:16:29.232 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:29.232 "is_configured": true, 00:16:29.232 "data_offset": 2048, 00:16:29.232 "data_size": 63488 00:16:29.232 }, 00:16:29.232 { 00:16:29.232 "name": "pt2", 00:16:29.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.232 "is_configured": true, 00:16:29.232 "data_offset": 2048, 00:16:29.232 "data_size": 63488 00:16:29.232 }, 00:16:29.232 { 00:16:29.232 "name": "pt3", 00:16:29.232 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:29.232 "is_configured": true, 00:16:29.232 "data_offset": 2048, 00:16:29.232 "data_size": 63488 00:16:29.232 }, 00:16:29.232 { 00:16:29.232 "name": "pt4", 00:16:29.232 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:29.232 "is_configured": true, 00:16:29.232 "data_offset": 2048, 00:16:29.232 "data_size": 63488 00:16:29.232 } 00:16:29.232 ] 00:16:29.232 } 00:16:29.232 } 00:16:29.232 }' 00:16:29.232 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:29.492 pt2 00:16:29.492 pt3 00:16:29.492 pt4' 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.492 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.493 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.493 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:29.493 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:29.493 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:29.493 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:29.493 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.493 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.493 [2024-11-15 11:01:36.409616] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.752 11:01:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5be4f8f3-fc08-40a6-9e0e-107e4769ab6e '!=' 5be4f8f3-fc08-40a6-9e0e-107e4769ab6e ']' 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.752 [2024-11-15 11:01:36.453406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:29.752 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.753 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.753 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.753 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.753 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.753 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.753 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.753 "name": "raid_bdev1", 00:16:29.753 "uuid": "5be4f8f3-fc08-40a6-9e0e-107e4769ab6e", 00:16:29.753 "strip_size_kb": 64, 00:16:29.753 "state": "online", 00:16:29.753 "raid_level": "raid5f", 00:16:29.753 "superblock": true, 00:16:29.753 "num_base_bdevs": 4, 00:16:29.753 "num_base_bdevs_discovered": 3, 00:16:29.753 "num_base_bdevs_operational": 3, 00:16:29.753 "base_bdevs_list": [ 00:16:29.753 { 00:16:29.753 "name": null, 00:16:29.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.753 "is_configured": false, 00:16:29.753 "data_offset": 0, 00:16:29.753 "data_size": 63488 00:16:29.753 }, 00:16:29.753 { 00:16:29.753 "name": "pt2", 00:16:29.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.753 "is_configured": true, 00:16:29.753 "data_offset": 2048, 00:16:29.753 "data_size": 63488 00:16:29.753 }, 00:16:29.753 { 00:16:29.753 "name": "pt3", 00:16:29.753 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:29.753 "is_configured": true, 00:16:29.753 "data_offset": 2048, 00:16:29.753 "data_size": 63488 00:16:29.753 }, 00:16:29.753 { 00:16:29.753 "name": "pt4", 00:16:29.753 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:29.753 "is_configured": true, 00:16:29.753 
"data_offset": 2048, 00:16:29.753 "data_size": 63488 00:16:29.753 } 00:16:29.753 ] 00:16:29.753 }' 00:16:29.753 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.753 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.322 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:30.322 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.322 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.322 [2024-11-15 11:01:36.960512] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:30.322 [2024-11-15 11:01:36.960548] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.322 [2024-11-15 11:01:36.960648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.322 [2024-11-15 11:01:36.960742] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.322 [2024-11-15 11:01:36.960759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:30.322 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.322 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.322 11:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:30.322 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.322 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.322 11:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.322 11:01:37 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:30.322 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:30.322 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:30.322 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:30.322 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:30.322 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.322 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.322 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.323 [2024-11-15 11:01:37.056309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:30.323 [2024-11-15 11:01:37.056389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.323 [2024-11-15 11:01:37.056416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:30.323 [2024-11-15 11:01:37.056425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.323 [2024-11-15 11:01:37.058559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.323 [2024-11-15 11:01:37.058590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:30.323 [2024-11-15 11:01:37.058672] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:30.323 [2024-11-15 11:01:37.058724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:30.323 pt2 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.323 "name": "raid_bdev1", 00:16:30.323 "uuid": "5be4f8f3-fc08-40a6-9e0e-107e4769ab6e", 00:16:30.323 "strip_size_kb": 64, 00:16:30.323 "state": "configuring", 00:16:30.323 "raid_level": "raid5f", 00:16:30.323 "superblock": true, 00:16:30.323 
"num_base_bdevs": 4, 00:16:30.323 "num_base_bdevs_discovered": 1, 00:16:30.323 "num_base_bdevs_operational": 3, 00:16:30.323 "base_bdevs_list": [ 00:16:30.323 { 00:16:30.323 "name": null, 00:16:30.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.323 "is_configured": false, 00:16:30.323 "data_offset": 2048, 00:16:30.323 "data_size": 63488 00:16:30.323 }, 00:16:30.323 { 00:16:30.323 "name": "pt2", 00:16:30.323 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.323 "is_configured": true, 00:16:30.323 "data_offset": 2048, 00:16:30.323 "data_size": 63488 00:16:30.323 }, 00:16:30.323 { 00:16:30.323 "name": null, 00:16:30.323 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:30.323 "is_configured": false, 00:16:30.323 "data_offset": 2048, 00:16:30.323 "data_size": 63488 00:16:30.323 }, 00:16:30.323 { 00:16:30.323 "name": null, 00:16:30.323 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:30.323 "is_configured": false, 00:16:30.323 "data_offset": 2048, 00:16:30.323 "data_size": 63488 00:16:30.323 } 00:16:30.323 ] 00:16:30.323 }' 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.323 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.893 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.894 [2024-11-15 11:01:37.547515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:30.894 [2024-11-15 
11:01:37.547582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.894 [2024-11-15 11:01:37.547606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:30.894 [2024-11-15 11:01:37.547617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.894 [2024-11-15 11:01:37.548061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.894 [2024-11-15 11:01:37.548079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:30.894 [2024-11-15 11:01:37.548167] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:30.894 [2024-11-15 11:01:37.548195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:30.894 pt3 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.894 "name": "raid_bdev1", 00:16:30.894 "uuid": "5be4f8f3-fc08-40a6-9e0e-107e4769ab6e", 00:16:30.894 "strip_size_kb": 64, 00:16:30.894 "state": "configuring", 00:16:30.894 "raid_level": "raid5f", 00:16:30.894 "superblock": true, 00:16:30.894 "num_base_bdevs": 4, 00:16:30.894 "num_base_bdevs_discovered": 2, 00:16:30.894 "num_base_bdevs_operational": 3, 00:16:30.894 "base_bdevs_list": [ 00:16:30.894 { 00:16:30.894 "name": null, 00:16:30.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.894 "is_configured": false, 00:16:30.894 "data_offset": 2048, 00:16:30.894 "data_size": 63488 00:16:30.894 }, 00:16:30.894 { 00:16:30.894 "name": "pt2", 00:16:30.894 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.894 "is_configured": true, 00:16:30.894 "data_offset": 2048, 00:16:30.894 "data_size": 63488 00:16:30.894 }, 00:16:30.894 { 00:16:30.894 "name": "pt3", 00:16:30.894 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:30.894 "is_configured": true, 00:16:30.894 "data_offset": 2048, 00:16:30.894 "data_size": 63488 00:16:30.894 }, 00:16:30.894 { 00:16:30.894 "name": null, 00:16:30.894 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:30.894 "is_configured": false, 00:16:30.894 "data_offset": 2048, 
00:16:30.894 "data_size": 63488 00:16:30.894 } 00:16:30.894 ] 00:16:30.894 }' 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.894 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.154 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:31.154 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:31.154 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:31.154 11:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:31.154 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.154 11:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.154 [2024-11-15 11:01:37.998732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:31.154 [2024-11-15 11:01:37.998796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.154 [2024-11-15 11:01:37.998819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:31.154 [2024-11-15 11:01:37.998827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.154 [2024-11-15 11:01:37.999257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.154 [2024-11-15 11:01:37.999274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:31.154 [2024-11-15 11:01:37.999367] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:31.154 [2024-11-15 11:01:37.999390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:31.154 [2024-11-15 11:01:37.999522] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:31.154 [2024-11-15 11:01:37.999530] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:31.154 [2024-11-15 11:01:37.999763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:31.154 [2024-11-15 11:01:38.006940] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:31.154 [2024-11-15 11:01:38.006970] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:31.154 pt4 00:16:31.154 [2024-11-15 11:01:38.007280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.154 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.154 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:31.154 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.154 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.154 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.154 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.154 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.154 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.154 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.154 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.154 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.155 
11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.155 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.155 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.155 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.155 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.155 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.155 "name": "raid_bdev1", 00:16:31.155 "uuid": "5be4f8f3-fc08-40a6-9e0e-107e4769ab6e", 00:16:31.155 "strip_size_kb": 64, 00:16:31.155 "state": "online", 00:16:31.155 "raid_level": "raid5f", 00:16:31.155 "superblock": true, 00:16:31.155 "num_base_bdevs": 4, 00:16:31.155 "num_base_bdevs_discovered": 3, 00:16:31.155 "num_base_bdevs_operational": 3, 00:16:31.155 "base_bdevs_list": [ 00:16:31.155 { 00:16:31.155 "name": null, 00:16:31.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.155 "is_configured": false, 00:16:31.155 "data_offset": 2048, 00:16:31.155 "data_size": 63488 00:16:31.155 }, 00:16:31.155 { 00:16:31.155 "name": "pt2", 00:16:31.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.155 "is_configured": true, 00:16:31.155 "data_offset": 2048, 00:16:31.155 "data_size": 63488 00:16:31.155 }, 00:16:31.155 { 00:16:31.155 "name": "pt3", 00:16:31.155 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:31.155 "is_configured": true, 00:16:31.155 "data_offset": 2048, 00:16:31.155 "data_size": 63488 00:16:31.155 }, 00:16:31.155 { 00:16:31.155 "name": "pt4", 00:16:31.155 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:31.155 "is_configured": true, 00:16:31.155 "data_offset": 2048, 00:16:31.155 "data_size": 63488 00:16:31.155 } 00:16:31.155 ] 00:16:31.155 }' 00:16:31.155 11:01:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.155 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.725 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:31.725 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.725 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.725 [2024-11-15 11:01:38.444129] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.726 [2024-11-15 11:01:38.444170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.726 [2024-11-15 11:01:38.444261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.726 [2024-11-15 11:01:38.444363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.726 [2024-11-15 11:01:38.444383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.726 [2024-11-15 11:01:38.519967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:31.726 [2024-11-15 11:01:38.520039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.726 [2024-11-15 11:01:38.520068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:31.726 [2024-11-15 11:01:38.520080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.726 [2024-11-15 11:01:38.522723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.726 [2024-11-15 11:01:38.522763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:31.726 [2024-11-15 11:01:38.522852] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:31.726 [2024-11-15 11:01:38.522922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:31.726 
[2024-11-15 11:01:38.523090] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:31.726 [2024-11-15 11:01:38.523109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.726 [2024-11-15 11:01:38.523128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:31.726 [2024-11-15 11:01:38.523192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:31.726 [2024-11-15 11:01:38.523363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:31.726 pt1 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.726 "name": "raid_bdev1", 00:16:31.726 "uuid": "5be4f8f3-fc08-40a6-9e0e-107e4769ab6e", 00:16:31.726 "strip_size_kb": 64, 00:16:31.726 "state": "configuring", 00:16:31.726 "raid_level": "raid5f", 00:16:31.726 "superblock": true, 00:16:31.726 "num_base_bdevs": 4, 00:16:31.726 "num_base_bdevs_discovered": 2, 00:16:31.726 "num_base_bdevs_operational": 3, 00:16:31.726 "base_bdevs_list": [ 00:16:31.726 { 00:16:31.726 "name": null, 00:16:31.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.726 "is_configured": false, 00:16:31.726 "data_offset": 2048, 00:16:31.726 "data_size": 63488 00:16:31.726 }, 00:16:31.726 { 00:16:31.726 "name": "pt2", 00:16:31.726 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.726 "is_configured": true, 00:16:31.726 "data_offset": 2048, 00:16:31.726 "data_size": 63488 00:16:31.726 }, 00:16:31.726 { 00:16:31.726 "name": "pt3", 00:16:31.726 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:31.726 "is_configured": true, 00:16:31.726 "data_offset": 2048, 00:16:31.726 "data_size": 63488 00:16:31.726 }, 00:16:31.726 { 00:16:31.726 "name": null, 00:16:31.726 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:31.726 "is_configured": false, 00:16:31.726 "data_offset": 2048, 00:16:31.726 "data_size": 63488 00:16:31.726 } 00:16:31.726 ] 
00:16:31.726 }' 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.726 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.296 [2024-11-15 11:01:38.971198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:32.296 [2024-11-15 11:01:38.971263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.296 [2024-11-15 11:01:38.971289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:32.296 [2024-11-15 11:01:38.971308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.296 [2024-11-15 11:01:38.971792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.296 [2024-11-15 11:01:38.971817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:32.296 [2024-11-15 11:01:38.971909] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:32.296 [2024-11-15 11:01:38.971940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:32.296 [2024-11-15 11:01:38.972109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:32.296 [2024-11-15 11:01:38.972119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:32.296 [2024-11-15 11:01:38.972413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:32.296 [2024-11-15 11:01:38.979946] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:32.296 [2024-11-15 11:01:38.979976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:32.296 [2024-11-15 11:01:38.980248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.296 pt4 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.296 11:01:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.296 11:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.296 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.296 11:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.296 "name": "raid_bdev1", 00:16:32.296 "uuid": "5be4f8f3-fc08-40a6-9e0e-107e4769ab6e", 00:16:32.296 "strip_size_kb": 64, 00:16:32.296 "state": "online", 00:16:32.296 "raid_level": "raid5f", 00:16:32.296 "superblock": true, 00:16:32.296 "num_base_bdevs": 4, 00:16:32.296 "num_base_bdevs_discovered": 3, 00:16:32.296 "num_base_bdevs_operational": 3, 00:16:32.296 "base_bdevs_list": [ 00:16:32.296 { 00:16:32.296 "name": null, 00:16:32.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.296 "is_configured": false, 00:16:32.296 "data_offset": 2048, 00:16:32.296 "data_size": 63488 00:16:32.296 }, 00:16:32.296 { 00:16:32.296 "name": "pt2", 00:16:32.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.296 "is_configured": true, 00:16:32.296 "data_offset": 2048, 00:16:32.296 "data_size": 63488 00:16:32.296 }, 00:16:32.296 { 00:16:32.296 "name": "pt3", 00:16:32.296 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:32.296 "is_configured": true, 00:16:32.296 "data_offset": 2048, 00:16:32.296 "data_size": 63488 
00:16:32.296 }, 00:16:32.296 { 00:16:32.296 "name": "pt4", 00:16:32.296 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:32.296 "is_configured": true, 00:16:32.296 "data_offset": 2048, 00:16:32.296 "data_size": 63488 00:16:32.296 } 00:16:32.296 ] 00:16:32.296 }' 00:16:32.296 11:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.296 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.556 11:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:32.556 11:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:32.556 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.556 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.556 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.556 11:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:32.556 11:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:32.556 11:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:32.556 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.556 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.556 [2024-11-15 11:01:39.456783] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.556 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.816 11:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5be4f8f3-fc08-40a6-9e0e-107e4769ab6e '!=' 5be4f8f3-fc08-40a6-9e0e-107e4769ab6e ']' 00:16:32.816 11:01:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84284 00:16:32.816 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 84284 ']' 00:16:32.816 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 84284 00:16:32.816 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:16:32.816 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:32.816 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84284 00:16:32.816 killing process with pid 84284 00:16:32.816 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:32.816 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:32.816 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84284' 00:16:32.816 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 84284 00:16:32.816 [2024-11-15 11:01:39.538763] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:32.816 [2024-11-15 11:01:39.538870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.816 11:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 84284 00:16:32.816 [2024-11-15 11:01:39.538959] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.816 [2024-11-15 11:01:39.538974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:33.076 [2024-11-15 11:01:39.953420] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.457 11:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:34.457 
00:16:34.457 real 0m8.630s 00:16:34.457 user 0m13.643s 00:16:34.457 sys 0m1.542s 00:16:34.457 11:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:34.457 11:01:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.457 ************************************ 00:16:34.457 END TEST raid5f_superblock_test 00:16:34.457 ************************************ 00:16:34.457 11:01:41 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:34.457 11:01:41 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:34.458 11:01:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:34.458 11:01:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:34.458 11:01:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.458 ************************************ 00:16:34.458 START TEST raid5f_rebuild_test 00:16:34.458 ************************************ 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:34.458 11:01:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84771 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84771 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 84771 ']' 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:34.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:34.458 11:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.458 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:34.458 Zero copy mechanism will not be used. 00:16:34.458 [2024-11-15 11:01:41.234885] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:16:34.458 [2024-11-15 11:01:41.235022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84771 ] 00:16:34.717 [2024-11-15 11:01:41.407293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.717 [2024-11-15 11:01:41.524708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.977 [2024-11-15 11:01:41.732296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.977 [2024-11-15 11:01:41.732378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.236 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:35.236 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:16:35.236 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:35.236 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:35.236 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.236 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.236 BaseBdev1_malloc 00:16:35.236 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.236 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:35.236 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.236 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.236 [2024-11-15 11:01:42.135006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:35.236 [2024-11-15 11:01:42.135081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.236 [2024-11-15 11:01:42.135125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:35.236 [2024-11-15 11:01:42.135139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.236 [2024-11-15 11:01:42.137634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.236 [2024-11-15 11:01:42.137682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:35.236 BaseBdev1 00:16:35.236 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.236 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:35.236 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:35.236 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.236 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.496 BaseBdev2_malloc 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.496 [2024-11-15 11:01:42.194845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:35.496 [2024-11-15 11:01:42.194918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.496 [2024-11-15 11:01:42.194939] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:35.496 [2024-11-15 11:01:42.194953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.496 [2024-11-15 11:01:42.197408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.496 [2024-11-15 11:01:42.197456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:35.496 BaseBdev2 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.496 BaseBdev3_malloc 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.496 [2024-11-15 11:01:42.266325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:35.496 [2024-11-15 11:01:42.266388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.496 [2024-11-15 11:01:42.266412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:35.496 [2024-11-15 11:01:42.266423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.496 
[2024-11-15 11:01:42.268723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.496 [2024-11-15 11:01:42.268773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:35.496 BaseBdev3 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.496 BaseBdev4_malloc 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.496 [2024-11-15 11:01:42.325609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:35.496 [2024-11-15 11:01:42.325689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.496 [2024-11-15 11:01:42.325711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:35.496 [2024-11-15 11:01:42.325724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.496 [2024-11-15 11:01:42.328211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.496 [2024-11-15 11:01:42.328258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:35.496 BaseBdev4 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.496 spare_malloc 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.496 spare_delay 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.496 [2024-11-15 11:01:42.395693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:35.496 [2024-11-15 11:01:42.395763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.496 [2024-11-15 11:01:42.395804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:35.496 [2024-11-15 11:01:42.395817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.496 [2024-11-15 11:01:42.398291] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.496 [2024-11-15 11:01:42.398347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:35.496 spare 00:16:35.496 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.497 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:35.497 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.497 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.497 [2024-11-15 11:01:42.407708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.497 [2024-11-15 11:01:42.409792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.497 [2024-11-15 11:01:42.409873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:35.497 [2024-11-15 11:01:42.409938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:35.497 [2024-11-15 11:01:42.410065] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:35.497 [2024-11-15 11:01:42.410087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:35.497 [2024-11-15 11:01:42.410399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:35.497 [2024-11-15 11:01:42.419105] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:35.497 [2024-11-15 11:01:42.419135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:35.497 [2024-11-15 11:01:42.419426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.497 11:01:42 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.497 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:35.497 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.497 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.497 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.761 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.761 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.761 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.761 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.761 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.761 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.761 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.761 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.761 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.761 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.761 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.761 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.761 "name": "raid_bdev1", 00:16:35.761 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:35.761 "strip_size_kb": 64, 00:16:35.761 "state": "online", 00:16:35.761 
"raid_level": "raid5f", 00:16:35.761 "superblock": false, 00:16:35.761 "num_base_bdevs": 4, 00:16:35.761 "num_base_bdevs_discovered": 4, 00:16:35.761 "num_base_bdevs_operational": 4, 00:16:35.761 "base_bdevs_list": [ 00:16:35.761 { 00:16:35.761 "name": "BaseBdev1", 00:16:35.761 "uuid": "47547be3-8e6f-587b-ad5b-c67d5fd7e6d3", 00:16:35.761 "is_configured": true, 00:16:35.761 "data_offset": 0, 00:16:35.761 "data_size": 65536 00:16:35.761 }, 00:16:35.761 { 00:16:35.761 "name": "BaseBdev2", 00:16:35.761 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:35.761 "is_configured": true, 00:16:35.761 "data_offset": 0, 00:16:35.761 "data_size": 65536 00:16:35.761 }, 00:16:35.761 { 00:16:35.761 "name": "BaseBdev3", 00:16:35.761 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:35.761 "is_configured": true, 00:16:35.761 "data_offset": 0, 00:16:35.761 "data_size": 65536 00:16:35.761 }, 00:16:35.761 { 00:16:35.762 "name": "BaseBdev4", 00:16:35.762 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:35.762 "is_configured": true, 00:16:35.762 "data_offset": 0, 00:16:35.762 "data_size": 65536 00:16:35.762 } 00:16:35.762 ] 00:16:35.762 }' 00:16:35.762 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.762 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:36.022 [2024-11-15 11:01:42.812757] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:36.022 11:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:36.282 [2024-11-15 11:01:43.064086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:36.282 /dev/nbd0 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:36.282 1+0 records in 00:16:36.282 1+0 records out 00:16:36.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316808 s, 12.9 MB/s 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:36.282 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:36.852 512+0 records in 00:16:36.852 512+0 records out 00:16:36.852 100663296 bytes (101 MB, 96 MiB) copied, 0.512773 s, 196 MB/s 00:16:36.852 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:36.852 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:36.852 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:36.852 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:36.852 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:36.852 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:36.852 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:37.112 
[2024-11-15 11:01:43.951557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.112 [2024-11-15 11:01:43.966514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.112 11:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.112 11:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.112 "name": "raid_bdev1", 00:16:37.112 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:37.112 "strip_size_kb": 64, 00:16:37.112 "state": "online", 00:16:37.112 "raid_level": "raid5f", 00:16:37.112 "superblock": false, 00:16:37.112 "num_base_bdevs": 4, 00:16:37.112 "num_base_bdevs_discovered": 3, 00:16:37.112 "num_base_bdevs_operational": 3, 00:16:37.112 "base_bdevs_list": [ 00:16:37.112 { 00:16:37.112 "name": null, 00:16:37.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.112 "is_configured": false, 00:16:37.112 "data_offset": 0, 00:16:37.112 "data_size": 65536 00:16:37.112 }, 00:16:37.112 { 00:16:37.112 "name": "BaseBdev2", 00:16:37.112 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:37.112 "is_configured": true, 00:16:37.112 "data_offset": 0, 00:16:37.112 "data_size": 65536 00:16:37.112 }, 00:16:37.112 { 00:16:37.112 "name": "BaseBdev3", 00:16:37.112 "uuid": 
"0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:37.112 "is_configured": true, 00:16:37.112 "data_offset": 0, 00:16:37.112 "data_size": 65536 00:16:37.112 }, 00:16:37.112 { 00:16:37.112 "name": "BaseBdev4", 00:16:37.112 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:37.112 "is_configured": true, 00:16:37.112 "data_offset": 0, 00:16:37.112 "data_size": 65536 00:16:37.112 } 00:16:37.112 ] 00:16:37.112 }' 00:16:37.112 11:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.112 11:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.682 11:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:37.682 11:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.682 11:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.682 [2024-11-15 11:01:44.445734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.682 [2024-11-15 11:01:44.462407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:37.682 11:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.682 11:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:37.682 [2024-11-15 11:01:44.472567] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:38.623 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.623 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.623 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.623 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.623 11:01:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.623 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.623 11:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.623 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.623 11:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.623 11:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.623 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.623 "name": "raid_bdev1", 00:16:38.623 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:38.623 "strip_size_kb": 64, 00:16:38.623 "state": "online", 00:16:38.623 "raid_level": "raid5f", 00:16:38.623 "superblock": false, 00:16:38.623 "num_base_bdevs": 4, 00:16:38.623 "num_base_bdevs_discovered": 4, 00:16:38.623 "num_base_bdevs_operational": 4, 00:16:38.623 "process": { 00:16:38.623 "type": "rebuild", 00:16:38.623 "target": "spare", 00:16:38.623 "progress": { 00:16:38.623 "blocks": 19200, 00:16:38.623 "percent": 9 00:16:38.623 } 00:16:38.623 }, 00:16:38.623 "base_bdevs_list": [ 00:16:38.623 { 00:16:38.624 "name": "spare", 00:16:38.624 "uuid": "3de0cc47-f589-5471-aa29-b85308bdc2da", 00:16:38.624 "is_configured": true, 00:16:38.624 "data_offset": 0, 00:16:38.624 "data_size": 65536 00:16:38.624 }, 00:16:38.624 { 00:16:38.624 "name": "BaseBdev2", 00:16:38.624 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:38.624 "is_configured": true, 00:16:38.624 "data_offset": 0, 00:16:38.624 "data_size": 65536 00:16:38.624 }, 00:16:38.624 { 00:16:38.624 "name": "BaseBdev3", 00:16:38.624 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:38.624 "is_configured": true, 00:16:38.624 "data_offset": 0, 00:16:38.624 "data_size": 65536 00:16:38.624 }, 
00:16:38.624 { 00:16:38.624 "name": "BaseBdev4", 00:16:38.624 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:38.624 "is_configured": true, 00:16:38.624 "data_offset": 0, 00:16:38.624 "data_size": 65536 00:16:38.624 } 00:16:38.624 ] 00:16:38.624 }' 00:16:38.624 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.883 [2024-11-15 11:01:45.623938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.883 [2024-11-15 11:01:45.681219] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:38.883 [2024-11-15 11:01:45.681313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.883 [2024-11-15 11:01:45.681334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.883 [2024-11-15 11:01:45.681344] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.883 "name": "raid_bdev1", 00:16:38.883 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:38.883 "strip_size_kb": 64, 00:16:38.883 "state": "online", 00:16:38.883 "raid_level": "raid5f", 00:16:38.883 "superblock": false, 00:16:38.883 "num_base_bdevs": 4, 00:16:38.883 "num_base_bdevs_discovered": 3, 00:16:38.883 "num_base_bdevs_operational": 3, 00:16:38.883 "base_bdevs_list": [ 00:16:38.883 { 00:16:38.883 "name": null, 00:16:38.883 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:38.883 "is_configured": false, 00:16:38.883 "data_offset": 0, 00:16:38.883 "data_size": 65536 00:16:38.883 }, 00:16:38.883 { 00:16:38.883 "name": "BaseBdev2", 00:16:38.883 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:38.883 "is_configured": true, 00:16:38.883 "data_offset": 0, 00:16:38.883 "data_size": 65536 00:16:38.883 }, 00:16:38.883 { 00:16:38.883 "name": "BaseBdev3", 00:16:38.883 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:38.883 "is_configured": true, 00:16:38.883 "data_offset": 0, 00:16:38.883 "data_size": 65536 00:16:38.883 }, 00:16:38.883 { 00:16:38.883 "name": "BaseBdev4", 00:16:38.883 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:38.883 "is_configured": true, 00:16:38.883 "data_offset": 0, 00:16:38.883 "data_size": 65536 00:16:38.883 } 00:16:38.883 ] 00:16:38.883 }' 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.883 11:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.451 "name": "raid_bdev1", 00:16:39.451 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:39.451 "strip_size_kb": 64, 00:16:39.451 "state": "online", 00:16:39.451 "raid_level": "raid5f", 00:16:39.451 "superblock": false, 00:16:39.451 "num_base_bdevs": 4, 00:16:39.451 "num_base_bdevs_discovered": 3, 00:16:39.451 "num_base_bdevs_operational": 3, 00:16:39.451 "base_bdevs_list": [ 00:16:39.451 { 00:16:39.451 "name": null, 00:16:39.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.451 "is_configured": false, 00:16:39.451 "data_offset": 0, 00:16:39.451 "data_size": 65536 00:16:39.451 }, 00:16:39.451 { 00:16:39.451 "name": "BaseBdev2", 00:16:39.451 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:39.451 "is_configured": true, 00:16:39.451 "data_offset": 0, 00:16:39.451 "data_size": 65536 00:16:39.451 }, 00:16:39.451 { 00:16:39.451 "name": "BaseBdev3", 00:16:39.451 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:39.451 "is_configured": true, 00:16:39.451 "data_offset": 0, 00:16:39.451 "data_size": 65536 00:16:39.451 }, 00:16:39.451 { 00:16:39.451 "name": "BaseBdev4", 00:16:39.451 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:39.451 "is_configured": true, 00:16:39.451 "data_offset": 0, 00:16:39.451 "data_size": 65536 00:16:39.451 } 00:16:39.451 ] 00:16:39.451 }' 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.451 [2024-11-15 11:01:46.355653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.451 [2024-11-15 11:01:46.372075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.451 11:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:39.710 [2024-11-15 11:01:46.383194] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:40.666 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.666 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.666 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.666 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.666 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.666 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.666 11:01:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.666 11:01:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.666 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.666 11:01:47 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.666 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.666 "name": "raid_bdev1", 00:16:40.666 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:40.666 "strip_size_kb": 64, 00:16:40.666 "state": "online", 00:16:40.666 "raid_level": "raid5f", 00:16:40.666 "superblock": false, 00:16:40.666 "num_base_bdevs": 4, 00:16:40.666 "num_base_bdevs_discovered": 4, 00:16:40.666 "num_base_bdevs_operational": 4, 00:16:40.666 "process": { 00:16:40.666 "type": "rebuild", 00:16:40.666 "target": "spare", 00:16:40.666 "progress": { 00:16:40.666 "blocks": 17280, 00:16:40.666 "percent": 8 00:16:40.666 } 00:16:40.666 }, 00:16:40.666 "base_bdevs_list": [ 00:16:40.666 { 00:16:40.666 "name": "spare", 00:16:40.667 "uuid": "3de0cc47-f589-5471-aa29-b85308bdc2da", 00:16:40.667 "is_configured": true, 00:16:40.667 "data_offset": 0, 00:16:40.667 "data_size": 65536 00:16:40.667 }, 00:16:40.667 { 00:16:40.667 "name": "BaseBdev2", 00:16:40.667 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:40.667 "is_configured": true, 00:16:40.667 "data_offset": 0, 00:16:40.667 "data_size": 65536 00:16:40.667 }, 00:16:40.667 { 00:16:40.667 "name": "BaseBdev3", 00:16:40.667 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:40.667 "is_configured": true, 00:16:40.667 "data_offset": 0, 00:16:40.667 "data_size": 65536 00:16:40.667 }, 00:16:40.667 { 00:16:40.667 "name": "BaseBdev4", 00:16:40.667 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:40.667 "is_configured": true, 00:16:40.667 "data_offset": 0, 00:16:40.667 "data_size": 65536 00:16:40.667 } 00:16:40.667 ] 00:16:40.667 }' 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=626 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.667 "name": "raid_bdev1", 00:16:40.667 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:40.667 "strip_size_kb": 64, 
00:16:40.667 "state": "online", 00:16:40.667 "raid_level": "raid5f", 00:16:40.667 "superblock": false, 00:16:40.667 "num_base_bdevs": 4, 00:16:40.667 "num_base_bdevs_discovered": 4, 00:16:40.667 "num_base_bdevs_operational": 4, 00:16:40.667 "process": { 00:16:40.667 "type": "rebuild", 00:16:40.667 "target": "spare", 00:16:40.667 "progress": { 00:16:40.667 "blocks": 21120, 00:16:40.667 "percent": 10 00:16:40.667 } 00:16:40.667 }, 00:16:40.667 "base_bdevs_list": [ 00:16:40.667 { 00:16:40.667 "name": "spare", 00:16:40.667 "uuid": "3de0cc47-f589-5471-aa29-b85308bdc2da", 00:16:40.667 "is_configured": true, 00:16:40.667 "data_offset": 0, 00:16:40.667 "data_size": 65536 00:16:40.667 }, 00:16:40.667 { 00:16:40.667 "name": "BaseBdev2", 00:16:40.667 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:40.667 "is_configured": true, 00:16:40.667 "data_offset": 0, 00:16:40.667 "data_size": 65536 00:16:40.667 }, 00:16:40.667 { 00:16:40.667 "name": "BaseBdev3", 00:16:40.667 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:40.667 "is_configured": true, 00:16:40.667 "data_offset": 0, 00:16:40.667 "data_size": 65536 00:16:40.667 }, 00:16:40.667 { 00:16:40.667 "name": "BaseBdev4", 00:16:40.667 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:40.667 "is_configured": true, 00:16:40.667 "data_offset": 0, 00:16:40.667 "data_size": 65536 00:16:40.667 } 00:16:40.667 ] 00:16:40.667 }' 00:16:40.667 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.926 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.926 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.926 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.926 11:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:41.865 11:01:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:41.865 11:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.865 11:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.865 11:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.865 11:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.865 11:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.865 11:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.865 11:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.865 11:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.865 11:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.865 11:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.865 11:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.865 "name": "raid_bdev1", 00:16:41.865 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:41.865 "strip_size_kb": 64, 00:16:41.865 "state": "online", 00:16:41.865 "raid_level": "raid5f", 00:16:41.865 "superblock": false, 00:16:41.865 "num_base_bdevs": 4, 00:16:41.865 "num_base_bdevs_discovered": 4, 00:16:41.865 "num_base_bdevs_operational": 4, 00:16:41.865 "process": { 00:16:41.865 "type": "rebuild", 00:16:41.865 "target": "spare", 00:16:41.865 "progress": { 00:16:41.865 "blocks": 42240, 00:16:41.865 "percent": 21 00:16:41.865 } 00:16:41.865 }, 00:16:41.865 "base_bdevs_list": [ 00:16:41.865 { 00:16:41.865 "name": "spare", 00:16:41.865 "uuid": "3de0cc47-f589-5471-aa29-b85308bdc2da", 00:16:41.865 "is_configured": true, 
00:16:41.865 "data_offset": 0, 00:16:41.865 "data_size": 65536 00:16:41.865 }, 00:16:41.865 { 00:16:41.865 "name": "BaseBdev2", 00:16:41.865 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:41.865 "is_configured": true, 00:16:41.865 "data_offset": 0, 00:16:41.865 "data_size": 65536 00:16:41.865 }, 00:16:41.865 { 00:16:41.865 "name": "BaseBdev3", 00:16:41.865 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:41.865 "is_configured": true, 00:16:41.865 "data_offset": 0, 00:16:41.865 "data_size": 65536 00:16:41.865 }, 00:16:41.865 { 00:16:41.865 "name": "BaseBdev4", 00:16:41.865 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:41.865 "is_configured": true, 00:16:41.865 "data_offset": 0, 00:16:41.865 "data_size": 65536 00:16:41.865 } 00:16:41.865 ] 00:16:41.865 }' 00:16:41.865 11:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.865 11:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:41.865 11:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.125 11:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.125 11:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.065 "name": "raid_bdev1", 00:16:43.065 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:43.065 "strip_size_kb": 64, 00:16:43.065 "state": "online", 00:16:43.065 "raid_level": "raid5f", 00:16:43.065 "superblock": false, 00:16:43.065 "num_base_bdevs": 4, 00:16:43.065 "num_base_bdevs_discovered": 4, 00:16:43.065 "num_base_bdevs_operational": 4, 00:16:43.065 "process": { 00:16:43.065 "type": "rebuild", 00:16:43.065 "target": "spare", 00:16:43.065 "progress": { 00:16:43.065 "blocks": 65280, 00:16:43.065 "percent": 33 00:16:43.065 } 00:16:43.065 }, 00:16:43.065 "base_bdevs_list": [ 00:16:43.065 { 00:16:43.065 "name": "spare", 00:16:43.065 "uuid": "3de0cc47-f589-5471-aa29-b85308bdc2da", 00:16:43.065 "is_configured": true, 00:16:43.065 "data_offset": 0, 00:16:43.065 "data_size": 65536 00:16:43.065 }, 00:16:43.065 { 00:16:43.065 "name": "BaseBdev2", 00:16:43.065 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:43.065 "is_configured": true, 00:16:43.065 "data_offset": 0, 00:16:43.065 "data_size": 65536 00:16:43.065 }, 00:16:43.065 { 00:16:43.065 "name": "BaseBdev3", 00:16:43.065 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:43.065 "is_configured": true, 00:16:43.065 "data_offset": 0, 00:16:43.065 "data_size": 65536 00:16:43.065 }, 00:16:43.065 { 00:16:43.065 "name": "BaseBdev4", 00:16:43.065 "uuid": 
"353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:43.065 "is_configured": true, 00:16:43.065 "data_offset": 0, 00:16:43.065 "data_size": 65536 00:16:43.065 } 00:16:43.065 ] 00:16:43.065 }' 00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.065 11:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:44.477 11:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:44.477 11:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.477 11:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.477 11:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.477 11:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.477 11:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.477 11:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.477 11:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.477 11:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.477 11:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.477 11:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.477 11:01:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.477 "name": "raid_bdev1", 00:16:44.477 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:44.477 "strip_size_kb": 64, 00:16:44.477 "state": "online", 00:16:44.477 "raid_level": "raid5f", 00:16:44.477 "superblock": false, 00:16:44.477 "num_base_bdevs": 4, 00:16:44.477 "num_base_bdevs_discovered": 4, 00:16:44.477 "num_base_bdevs_operational": 4, 00:16:44.477 "process": { 00:16:44.477 "type": "rebuild", 00:16:44.477 "target": "spare", 00:16:44.477 "progress": { 00:16:44.477 "blocks": 86400, 00:16:44.477 "percent": 43 00:16:44.477 } 00:16:44.477 }, 00:16:44.477 "base_bdevs_list": [ 00:16:44.477 { 00:16:44.477 "name": "spare", 00:16:44.477 "uuid": "3de0cc47-f589-5471-aa29-b85308bdc2da", 00:16:44.477 "is_configured": true, 00:16:44.477 "data_offset": 0, 00:16:44.477 "data_size": 65536 00:16:44.477 }, 00:16:44.477 { 00:16:44.477 "name": "BaseBdev2", 00:16:44.477 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:44.477 "is_configured": true, 00:16:44.477 "data_offset": 0, 00:16:44.477 "data_size": 65536 00:16:44.477 }, 00:16:44.477 { 00:16:44.477 "name": "BaseBdev3", 00:16:44.477 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:44.477 "is_configured": true, 00:16:44.477 "data_offset": 0, 00:16:44.477 "data_size": 65536 00:16:44.477 }, 00:16:44.477 { 00:16:44.477 "name": "BaseBdev4", 00:16:44.477 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:44.477 "is_configured": true, 00:16:44.477 "data_offset": 0, 00:16:44.477 "data_size": 65536 00:16:44.477 } 00:16:44.477 ] 00:16:44.477 }' 00:16:44.477 11:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.477 11:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.477 11:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.477 11:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:16:44.477 11:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.417 "name": "raid_bdev1", 00:16:45.417 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:45.417 "strip_size_kb": 64, 00:16:45.417 "state": "online", 00:16:45.417 "raid_level": "raid5f", 00:16:45.417 "superblock": false, 00:16:45.417 "num_base_bdevs": 4, 00:16:45.417 "num_base_bdevs_discovered": 4, 00:16:45.417 "num_base_bdevs_operational": 4, 00:16:45.417 "process": { 00:16:45.417 "type": "rebuild", 00:16:45.417 "target": "spare", 00:16:45.417 "progress": { 00:16:45.417 "blocks": 109440, 00:16:45.417 "percent": 55 00:16:45.417 } 00:16:45.417 }, 00:16:45.417 
"base_bdevs_list": [ 00:16:45.417 { 00:16:45.417 "name": "spare", 00:16:45.417 "uuid": "3de0cc47-f589-5471-aa29-b85308bdc2da", 00:16:45.417 "is_configured": true, 00:16:45.417 "data_offset": 0, 00:16:45.417 "data_size": 65536 00:16:45.417 }, 00:16:45.417 { 00:16:45.417 "name": "BaseBdev2", 00:16:45.417 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:45.417 "is_configured": true, 00:16:45.417 "data_offset": 0, 00:16:45.417 "data_size": 65536 00:16:45.417 }, 00:16:45.417 { 00:16:45.417 "name": "BaseBdev3", 00:16:45.417 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:45.417 "is_configured": true, 00:16:45.417 "data_offset": 0, 00:16:45.417 "data_size": 65536 00:16:45.417 }, 00:16:45.417 { 00:16:45.417 "name": "BaseBdev4", 00:16:45.417 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:45.417 "is_configured": true, 00:16:45.417 "data_offset": 0, 00:16:45.417 "data_size": 65536 00:16:45.417 } 00:16:45.417 ] 00:16:45.417 }' 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.417 11:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:46.797 11:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:46.797 11:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.797 11:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.797 11:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.797 11:01:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.797 11:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.797 11:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.797 11:01:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.797 11:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.797 11:01:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.797 11:01:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.797 11:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.797 "name": "raid_bdev1", 00:16:46.797 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:46.797 "strip_size_kb": 64, 00:16:46.797 "state": "online", 00:16:46.797 "raid_level": "raid5f", 00:16:46.797 "superblock": false, 00:16:46.797 "num_base_bdevs": 4, 00:16:46.797 "num_base_bdevs_discovered": 4, 00:16:46.797 "num_base_bdevs_operational": 4, 00:16:46.797 "process": { 00:16:46.797 "type": "rebuild", 00:16:46.797 "target": "spare", 00:16:46.797 "progress": { 00:16:46.797 "blocks": 130560, 00:16:46.797 "percent": 66 00:16:46.797 } 00:16:46.797 }, 00:16:46.797 "base_bdevs_list": [ 00:16:46.797 { 00:16:46.797 "name": "spare", 00:16:46.797 "uuid": "3de0cc47-f589-5471-aa29-b85308bdc2da", 00:16:46.797 "is_configured": true, 00:16:46.797 "data_offset": 0, 00:16:46.797 "data_size": 65536 00:16:46.797 }, 00:16:46.797 { 00:16:46.797 "name": "BaseBdev2", 00:16:46.797 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:46.797 "is_configured": true, 00:16:46.797 "data_offset": 0, 00:16:46.797 "data_size": 65536 00:16:46.797 }, 00:16:46.797 { 00:16:46.797 "name": "BaseBdev3", 00:16:46.797 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:46.797 
"is_configured": true, 00:16:46.797 "data_offset": 0, 00:16:46.797 "data_size": 65536 00:16:46.797 }, 00:16:46.797 { 00:16:46.798 "name": "BaseBdev4", 00:16:46.798 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:46.798 "is_configured": true, 00:16:46.798 "data_offset": 0, 00:16:46.798 "data_size": 65536 00:16:46.798 } 00:16:46.798 ] 00:16:46.798 }' 00:16:46.798 11:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.798 11:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.798 11:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.798 11:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.798 11:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:47.736 11:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:47.736 11:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.736 11:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.736 11:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.736 11:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.736 11:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.736 11:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.736 11:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.736 11:01:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.736 11:01:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:47.736 11:01:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.736 11:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.736 "name": "raid_bdev1", 00:16:47.736 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:47.736 "strip_size_kb": 64, 00:16:47.736 "state": "online", 00:16:47.736 "raid_level": "raid5f", 00:16:47.736 "superblock": false, 00:16:47.736 "num_base_bdevs": 4, 00:16:47.736 "num_base_bdevs_discovered": 4, 00:16:47.736 "num_base_bdevs_operational": 4, 00:16:47.736 "process": { 00:16:47.736 "type": "rebuild", 00:16:47.736 "target": "spare", 00:16:47.736 "progress": { 00:16:47.736 "blocks": 151680, 00:16:47.736 "percent": 77 00:16:47.736 } 00:16:47.736 }, 00:16:47.737 "base_bdevs_list": [ 00:16:47.737 { 00:16:47.737 "name": "spare", 00:16:47.737 "uuid": "3de0cc47-f589-5471-aa29-b85308bdc2da", 00:16:47.737 "is_configured": true, 00:16:47.737 "data_offset": 0, 00:16:47.737 "data_size": 65536 00:16:47.737 }, 00:16:47.737 { 00:16:47.737 "name": "BaseBdev2", 00:16:47.737 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:47.737 "is_configured": true, 00:16:47.737 "data_offset": 0, 00:16:47.737 "data_size": 65536 00:16:47.737 }, 00:16:47.737 { 00:16:47.737 "name": "BaseBdev3", 00:16:47.737 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:47.737 "is_configured": true, 00:16:47.737 "data_offset": 0, 00:16:47.737 "data_size": 65536 00:16:47.737 }, 00:16:47.737 { 00:16:47.737 "name": "BaseBdev4", 00:16:47.737 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:47.737 "is_configured": true, 00:16:47.737 "data_offset": 0, 00:16:47.737 "data_size": 65536 00:16:47.737 } 00:16:47.737 ] 00:16:47.737 }' 00:16:47.737 11:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.737 11:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.737 11:01:54 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.737 11:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.737 11:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:48.672 11:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:48.672 11:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.672 11:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.672 11:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.672 11:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.672 11:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.672 11:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.672 11:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.672 11:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.672 11:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.672 11:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.932 11:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.932 "name": "raid_bdev1", 00:16:48.932 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:48.932 "strip_size_kb": 64, 00:16:48.932 "state": "online", 00:16:48.932 "raid_level": "raid5f", 00:16:48.932 "superblock": false, 00:16:48.932 "num_base_bdevs": 4, 00:16:48.932 "num_base_bdevs_discovered": 4, 00:16:48.932 "num_base_bdevs_operational": 4, 00:16:48.932 "process": { 00:16:48.932 
"type": "rebuild", 00:16:48.932 "target": "spare", 00:16:48.932 "progress": { 00:16:48.932 "blocks": 174720, 00:16:48.932 "percent": 88 00:16:48.932 } 00:16:48.932 }, 00:16:48.932 "base_bdevs_list": [ 00:16:48.932 { 00:16:48.932 "name": "spare", 00:16:48.932 "uuid": "3de0cc47-f589-5471-aa29-b85308bdc2da", 00:16:48.932 "is_configured": true, 00:16:48.932 "data_offset": 0, 00:16:48.932 "data_size": 65536 00:16:48.932 }, 00:16:48.932 { 00:16:48.932 "name": "BaseBdev2", 00:16:48.932 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:48.932 "is_configured": true, 00:16:48.932 "data_offset": 0, 00:16:48.932 "data_size": 65536 00:16:48.932 }, 00:16:48.932 { 00:16:48.932 "name": "BaseBdev3", 00:16:48.932 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:48.932 "is_configured": true, 00:16:48.932 "data_offset": 0, 00:16:48.932 "data_size": 65536 00:16:48.932 }, 00:16:48.932 { 00:16:48.932 "name": "BaseBdev4", 00:16:48.932 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:48.932 "is_configured": true, 00:16:48.932 "data_offset": 0, 00:16:48.932 "data_size": 65536 00:16:48.932 } 00:16:48.932 ] 00:16:48.932 }' 00:16:48.932 11:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.932 11:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.932 11:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.932 11:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.932 11:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:49.873 11:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:49.873 11:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.873 11:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:49.873 11:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.873 11:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.873 11:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.873 11:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.873 11:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.873 11:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.873 11:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.873 11:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.873 [2024-11-15 11:01:56.755229] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:49.873 [2024-11-15 11:01:56.755311] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:49.873 [2024-11-15 11:01:56.755357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.873 11:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.873 "name": "raid_bdev1", 00:16:49.873 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:49.873 "strip_size_kb": 64, 00:16:49.873 "state": "online", 00:16:49.873 "raid_level": "raid5f", 00:16:49.873 "superblock": false, 00:16:49.873 "num_base_bdevs": 4, 00:16:49.873 "num_base_bdevs_discovered": 4, 00:16:49.873 "num_base_bdevs_operational": 4, 00:16:49.873 "process": { 00:16:49.873 "type": "rebuild", 00:16:49.873 "target": "spare", 00:16:49.873 "progress": { 00:16:49.873 "blocks": 195840, 00:16:49.873 "percent": 99 00:16:49.873 } 00:16:49.873 }, 00:16:49.873 "base_bdevs_list": [ 00:16:49.873 { 00:16:49.873 "name": 
"spare", 00:16:49.873 "uuid": "3de0cc47-f589-5471-aa29-b85308bdc2da", 00:16:49.873 "is_configured": true, 00:16:49.873 "data_offset": 0, 00:16:49.873 "data_size": 65536 00:16:49.873 }, 00:16:49.873 { 00:16:49.873 "name": "BaseBdev2", 00:16:49.873 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:49.873 "is_configured": true, 00:16:49.873 "data_offset": 0, 00:16:49.873 "data_size": 65536 00:16:49.873 }, 00:16:49.873 { 00:16:49.873 "name": "BaseBdev3", 00:16:49.873 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:49.873 "is_configured": true, 00:16:49.873 "data_offset": 0, 00:16:49.873 "data_size": 65536 00:16:49.873 }, 00:16:49.873 { 00:16:49.873 "name": "BaseBdev4", 00:16:49.873 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:49.873 "is_configured": true, 00:16:49.873 "data_offset": 0, 00:16:49.873 "data_size": 65536 00:16:49.873 } 00:16:49.873 ] 00:16:49.873 }' 00:16:49.873 11:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.135 11:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.135 11:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.135 11:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.135 11:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:51.072 11:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.072 11:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.072 11:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.072 11:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.072 11:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:51.072 11:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.072 11:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.072 11:01:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.072 11:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.072 11:01:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.072 11:01:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.072 11:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.072 "name": "raid_bdev1", 00:16:51.072 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:51.072 "strip_size_kb": 64, 00:16:51.072 "state": "online", 00:16:51.072 "raid_level": "raid5f", 00:16:51.072 "superblock": false, 00:16:51.072 "num_base_bdevs": 4, 00:16:51.072 "num_base_bdevs_discovered": 4, 00:16:51.072 "num_base_bdevs_operational": 4, 00:16:51.072 "base_bdevs_list": [ 00:16:51.072 { 00:16:51.072 "name": "spare", 00:16:51.072 "uuid": "3de0cc47-f589-5471-aa29-b85308bdc2da", 00:16:51.072 "is_configured": true, 00:16:51.072 "data_offset": 0, 00:16:51.072 "data_size": 65536 00:16:51.072 }, 00:16:51.072 { 00:16:51.072 "name": "BaseBdev2", 00:16:51.072 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:51.072 "is_configured": true, 00:16:51.072 "data_offset": 0, 00:16:51.072 "data_size": 65536 00:16:51.072 }, 00:16:51.072 { 00:16:51.072 "name": "BaseBdev3", 00:16:51.072 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:51.072 "is_configured": true, 00:16:51.072 "data_offset": 0, 00:16:51.072 "data_size": 65536 00:16:51.072 }, 00:16:51.072 { 00:16:51.072 "name": "BaseBdev4", 00:16:51.073 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:51.073 "is_configured": true, 00:16:51.073 "data_offset": 0, 00:16:51.073 
"data_size": 65536 00:16:51.073 } 00:16:51.073 ] 00:16:51.073 }' 00:16:51.073 11:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.073 11:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:51.073 11:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.331 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:51.331 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:51.331 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:51.331 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.331 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:51.331 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:51.331 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.331 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.331 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.331 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.331 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.331 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.331 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.331 "name": "raid_bdev1", 00:16:51.331 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:51.331 "strip_size_kb": 64, 00:16:51.331 "state": "online", 00:16:51.331 "raid_level": "raid5f", 
00:16:51.331 "superblock": false, 00:16:51.331 "num_base_bdevs": 4, 00:16:51.331 "num_base_bdevs_discovered": 4, 00:16:51.332 "num_base_bdevs_operational": 4, 00:16:51.332 "base_bdevs_list": [ 00:16:51.332 { 00:16:51.332 "name": "spare", 00:16:51.332 "uuid": "3de0cc47-f589-5471-aa29-b85308bdc2da", 00:16:51.332 "is_configured": true, 00:16:51.332 "data_offset": 0, 00:16:51.332 "data_size": 65536 00:16:51.332 }, 00:16:51.332 { 00:16:51.332 "name": "BaseBdev2", 00:16:51.332 "uuid": "eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:51.332 "is_configured": true, 00:16:51.332 "data_offset": 0, 00:16:51.332 "data_size": 65536 00:16:51.332 }, 00:16:51.332 { 00:16:51.332 "name": "BaseBdev3", 00:16:51.332 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:51.332 "is_configured": true, 00:16:51.332 "data_offset": 0, 00:16:51.332 "data_size": 65536 00:16:51.332 }, 00:16:51.332 { 00:16:51.332 "name": "BaseBdev4", 00:16:51.332 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:51.332 "is_configured": true, 00:16:51.332 "data_offset": 0, 00:16:51.332 "data_size": 65536 00:16:51.332 } 00:16:51.332 ] 00:16:51.332 }' 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.332 "name": "raid_bdev1", 00:16:51.332 "uuid": "0a94035f-befa-4b80-a651-a9f1b55cc1cd", 00:16:51.332 "strip_size_kb": 64, 00:16:51.332 "state": "online", 00:16:51.332 "raid_level": "raid5f", 00:16:51.332 "superblock": false, 00:16:51.332 "num_base_bdevs": 4, 00:16:51.332 "num_base_bdevs_discovered": 4, 00:16:51.332 "num_base_bdevs_operational": 4, 00:16:51.332 "base_bdevs_list": [ 00:16:51.332 { 00:16:51.332 "name": "spare", 00:16:51.332 "uuid": "3de0cc47-f589-5471-aa29-b85308bdc2da", 00:16:51.332 "is_configured": true, 00:16:51.332 "data_offset": 0, 00:16:51.332 "data_size": 65536 00:16:51.332 }, 00:16:51.332 { 00:16:51.332 "name": "BaseBdev2", 00:16:51.332 "uuid": 
"eddf3a15-a03a-5709-97ab-8f4061bf9e3d", 00:16:51.332 "is_configured": true, 00:16:51.332 "data_offset": 0, 00:16:51.332 "data_size": 65536 00:16:51.332 }, 00:16:51.332 { 00:16:51.332 "name": "BaseBdev3", 00:16:51.332 "uuid": "0fdec842-a75f-5159-84e1-fea09217de0b", 00:16:51.332 "is_configured": true, 00:16:51.332 "data_offset": 0, 00:16:51.332 "data_size": 65536 00:16:51.332 }, 00:16:51.332 { 00:16:51.332 "name": "BaseBdev4", 00:16:51.332 "uuid": "353838e6-7549-5528-9a4a-1b9463c2d689", 00:16:51.332 "is_configured": true, 00:16:51.332 "data_offset": 0, 00:16:51.332 "data_size": 65536 00:16:51.332 } 00:16:51.332 ] 00:16:51.332 }' 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.332 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.901 [2024-11-15 11:01:58.645031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.901 [2024-11-15 11:01:58.645070] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.901 [2024-11-15 11:01:58.645173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.901 [2024-11-15 11:01:58.645275] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.901 [2024-11-15 11:01:58.645292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:51.901 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:52.160 /dev/nbd0 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:52.160 1+0 records in 00:16:52.160 1+0 records out 00:16:52.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031361 s, 13.1 MB/s 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:52.160 11:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:52.420 /dev/nbd1 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:52.420 1+0 records in 00:16:52.420 1+0 records out 00:16:52.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315875 s, 13.0 MB/s 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:52.420 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:52.680 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:52.680 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:52.680 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:52.680 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:52.680 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:52.680 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:52.680 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:52.939 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:52.939 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:52.939 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:52.939 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:52.939 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:52.939 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:16:52.939 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:52.939 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:52.939 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:52.939 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84771 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 84771 ']' 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 84771 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 84771 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:53.199 killing process with pid 84771 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84771' 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 84771 00:16:53.199 Received shutdown signal, test time was about 60.000000 seconds 00:16:53.199 00:16:53.199 Latency(us) 00:16:53.199 [2024-11-15T11:02:00.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.199 [2024-11-15T11:02:00.127Z] =================================================================================================================== 00:16:53.199 [2024-11-15T11:02:00.127Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:53.199 [2024-11-15 11:01:59.924205] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:53.199 11:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 84771 00:16:53.775 [2024-11-15 11:02:00.440874] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:55.153 11:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:55.153 00:16:55.153 real 0m20.516s 00:16:55.153 user 0m24.551s 00:16:55.153 sys 0m2.387s 00:16:55.153 11:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:55.153 11:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.153 ************************************ 00:16:55.153 END TEST raid5f_rebuild_test 00:16:55.153 ************************************ 00:16:55.153 11:02:01 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:55.153 11:02:01 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:55.153 11:02:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:55.153 11:02:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:55.153 ************************************ 00:16:55.153 START TEST raid5f_rebuild_test_sb 00:16:55.153 ************************************ 00:16:55.153 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:16:55.153 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:55.153 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:55.153 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:55.153 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:55.153 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:55.153 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:55.153 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:55.153 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:55.153 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:55.153 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:55.153 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:55.153 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:55.154 11:02:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85293 
00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85293 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 85293 ']' 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:55.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:55.154 11:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.154 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:55.154 Zero copy mechanism will not be used. 00:16:55.154 [2024-11-15 11:02:01.836192] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:16:55.154 [2024-11-15 11:02:01.836352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85293 ] 00:16:55.154 [2024-11-15 11:02:02.013502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.413 [2024-11-15 11:02:02.145718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.672 [2024-11-15 11:02:02.364764] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.672 [2024-11-15 11:02:02.364847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.932 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:55.932 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:55.932 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:55.932 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:55.932 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.932 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.932 BaseBdev1_malloc 00:16:55.932 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.932 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:55.932 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.932 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.932 [2024-11-15 11:02:02.764660] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:55.932 [2024-11-15 11:02:02.764736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.932 [2024-11-15 11:02:02.764761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:55.932 [2024-11-15 11:02:02.764773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.932 [2024-11-15 11:02:02.766999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.932 [2024-11-15 11:02:02.767042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:55.932 BaseBdev1 00:16:55.933 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.933 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:55.933 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:55.933 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.933 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.933 BaseBdev2_malloc 00:16:55.933 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.933 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:55.933 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.933 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.933 [2024-11-15 11:02:02.821178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:55.933 [2024-11-15 11:02:02.821241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:55.933 [2024-11-15 11:02:02.821261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:55.933 [2024-11-15 11:02:02.821275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.933 [2024-11-15 11:02:02.823474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.933 [2024-11-15 11:02:02.823514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:55.933 BaseBdev2 00:16:55.933 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.933 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:55.933 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:55.933 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.933 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.193 BaseBdev3_malloc 00:16:56.193 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.193 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:56.193 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.193 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.193 [2024-11-15 11:02:02.888570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:56.193 [2024-11-15 11:02:02.888627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.193 [2024-11-15 11:02:02.888650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:56.193 [2024-11-15 
11:02:02.888661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.193 [2024-11-15 11:02:02.890858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.193 [2024-11-15 11:02:02.890902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:56.193 BaseBdev3 00:16:56.193 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.193 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:56.193 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:56.193 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.193 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.193 BaseBdev4_malloc 00:16:56.193 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.193 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:56.193 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.194 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.194 [2024-11-15 11:02:02.947775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:56.194 [2024-11-15 11:02:02.947854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.194 [2024-11-15 11:02:02.947875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:56.194 [2024-11-15 11:02:02.947888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.194 [2024-11-15 11:02:02.950118] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:56.194 [2024-11-15 11:02:02.950162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:56.194 BaseBdev4 00:16:56.194 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.194 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:56.194 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.194 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.194 spare_malloc 00:16:56.194 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.194 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:56.194 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.194 11:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.194 spare_delay 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.194 [2024-11-15 11:02:03.012896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:56.194 [2024-11-15 11:02:03.013018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.194 [2024-11-15 11:02:03.013046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:16:56.194 [2024-11-15 11:02:03.013058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.194 [2024-11-15 11:02:03.015383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.194 [2024-11-15 11:02:03.015419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:56.194 spare 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.194 [2024-11-15 11:02:03.020936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.194 [2024-11-15 11:02:03.022906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.194 [2024-11-15 11:02:03.022969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:56.194 [2024-11-15 11:02:03.023021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:56.194 [2024-11-15 11:02:03.023208] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:56.194 [2024-11-15 11:02:03.023232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:56.194 [2024-11-15 11:02:03.023513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:56.194 [2024-11-15 11:02:03.031324] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:56.194 [2024-11-15 11:02:03.031342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:16:56.194 [2024-11-15 11:02:03.031549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.194 11:02:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.194 "name": "raid_bdev1", 00:16:56.194 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:16:56.194 "strip_size_kb": 64, 00:16:56.194 "state": "online", 00:16:56.194 "raid_level": "raid5f", 00:16:56.194 "superblock": true, 00:16:56.194 "num_base_bdevs": 4, 00:16:56.194 "num_base_bdevs_discovered": 4, 00:16:56.194 "num_base_bdevs_operational": 4, 00:16:56.194 "base_bdevs_list": [ 00:16:56.194 { 00:16:56.194 "name": "BaseBdev1", 00:16:56.194 "uuid": "a421908f-1111-5840-b548-8e84a33a72e0", 00:16:56.194 "is_configured": true, 00:16:56.194 "data_offset": 2048, 00:16:56.194 "data_size": 63488 00:16:56.194 }, 00:16:56.194 { 00:16:56.194 "name": "BaseBdev2", 00:16:56.194 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:16:56.194 "is_configured": true, 00:16:56.194 "data_offset": 2048, 00:16:56.194 "data_size": 63488 00:16:56.194 }, 00:16:56.194 { 00:16:56.194 "name": "BaseBdev3", 00:16:56.194 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:16:56.194 "is_configured": true, 00:16:56.194 "data_offset": 2048, 00:16:56.194 "data_size": 63488 00:16:56.194 }, 00:16:56.194 { 00:16:56.194 "name": "BaseBdev4", 00:16:56.194 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:16:56.194 "is_configured": true, 00:16:56.194 "data_offset": 2048, 00:16:56.194 "data_size": 63488 00:16:56.194 } 00:16:56.194 ] 00:16:56.194 }' 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.194 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.765 11:02:03 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.765 [2024-11-15 11:02:03.503872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:56.765 11:02:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:56.765 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:57.025 [2024-11-15 11:02:03.783237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:57.025 /dev/nbd0 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:57.025 1+0 records in 00:16:57.025 
1+0 records out 00:16:57.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219781 s, 18.6 MB/s 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:57.025 11:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:57.601 496+0 records in 00:16:57.601 496+0 records out 00:16:57.601 97517568 bytes (98 MB, 93 MiB) copied, 0.503813 s, 194 MB/s 00:16:57.601 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:57.601 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:57.601 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:57.601 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:57.601 11:02:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:57.601 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:57.601 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:57.862 [2024-11-15 11:02:04.599064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.862 [2024-11-15 11:02:04.622519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:57.862 11:02:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.862 "name": "raid_bdev1", 00:16:57.862 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:16:57.862 "strip_size_kb": 64, 00:16:57.862 "state": "online", 00:16:57.862 "raid_level": "raid5f", 00:16:57.862 "superblock": true, 00:16:57.862 "num_base_bdevs": 4, 00:16:57.862 "num_base_bdevs_discovered": 3, 00:16:57.862 "num_base_bdevs_operational": 3, 00:16:57.862 
"base_bdevs_list": [ 00:16:57.862 { 00:16:57.862 "name": null, 00:16:57.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.862 "is_configured": false, 00:16:57.862 "data_offset": 0, 00:16:57.862 "data_size": 63488 00:16:57.862 }, 00:16:57.862 { 00:16:57.862 "name": "BaseBdev2", 00:16:57.862 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:16:57.862 "is_configured": true, 00:16:57.862 "data_offset": 2048, 00:16:57.862 "data_size": 63488 00:16:57.862 }, 00:16:57.862 { 00:16:57.862 "name": "BaseBdev3", 00:16:57.862 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:16:57.862 "is_configured": true, 00:16:57.862 "data_offset": 2048, 00:16:57.862 "data_size": 63488 00:16:57.862 }, 00:16:57.862 { 00:16:57.862 "name": "BaseBdev4", 00:16:57.862 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:16:57.862 "is_configured": true, 00:16:57.862 "data_offset": 2048, 00:16:57.862 "data_size": 63488 00:16:57.862 } 00:16:57.862 ] 00:16:57.862 }' 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.862 11:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.433 11:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:58.433 11:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.433 11:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.433 [2024-11-15 11:02:05.125695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:58.433 [2024-11-15 11:02:05.145027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:58.433 11:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.433 11:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:58.433 [2024-11-15 11:02:05.156474] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:59.373 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.373 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.373 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.373 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.373 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.373 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.373 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.373 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.373 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.374 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.374 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.374 "name": "raid_bdev1", 00:16:59.374 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:16:59.374 "strip_size_kb": 64, 00:16:59.374 "state": "online", 00:16:59.374 "raid_level": "raid5f", 00:16:59.374 "superblock": true, 00:16:59.374 "num_base_bdevs": 4, 00:16:59.374 "num_base_bdevs_discovered": 4, 00:16:59.374 "num_base_bdevs_operational": 4, 00:16:59.374 "process": { 00:16:59.374 "type": "rebuild", 00:16:59.374 "target": "spare", 00:16:59.374 "progress": { 00:16:59.374 "blocks": 19200, 00:16:59.374 "percent": 10 00:16:59.374 } 00:16:59.374 }, 00:16:59.374 "base_bdevs_list": [ 00:16:59.374 { 00:16:59.374 "name": "spare", 00:16:59.374 "uuid": 
"c0c50d01-8172-56b9-9ad0-c496e3608db1", 00:16:59.374 "is_configured": true, 00:16:59.374 "data_offset": 2048, 00:16:59.374 "data_size": 63488 00:16:59.374 }, 00:16:59.374 { 00:16:59.374 "name": "BaseBdev2", 00:16:59.374 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:16:59.374 "is_configured": true, 00:16:59.374 "data_offset": 2048, 00:16:59.374 "data_size": 63488 00:16:59.374 }, 00:16:59.374 { 00:16:59.374 "name": "BaseBdev3", 00:16:59.374 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:16:59.374 "is_configured": true, 00:16:59.374 "data_offset": 2048, 00:16:59.374 "data_size": 63488 00:16:59.374 }, 00:16:59.374 { 00:16:59.374 "name": "BaseBdev4", 00:16:59.374 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:16:59.374 "is_configured": true, 00:16:59.374 "data_offset": 2048, 00:16:59.374 "data_size": 63488 00:16:59.374 } 00:16:59.374 ] 00:16:59.374 }' 00:16:59.374 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.374 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.374 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.635 [2024-11-15 11:02:06.308555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:59.635 [2024-11-15 11:02:06.365749] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:59.635 [2024-11-15 11:02:06.365839] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.635 [2024-11-15 11:02:06.365861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:59.635 [2024-11-15 11:02:06.365873] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.635 "name": "raid_bdev1", 00:16:59.635 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:16:59.635 "strip_size_kb": 64, 00:16:59.635 "state": "online", 00:16:59.635 "raid_level": "raid5f", 00:16:59.635 "superblock": true, 00:16:59.635 "num_base_bdevs": 4, 00:16:59.635 "num_base_bdevs_discovered": 3, 00:16:59.635 "num_base_bdevs_operational": 3, 00:16:59.635 "base_bdevs_list": [ 00:16:59.635 { 00:16:59.635 "name": null, 00:16:59.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.635 "is_configured": false, 00:16:59.635 "data_offset": 0, 00:16:59.635 "data_size": 63488 00:16:59.635 }, 00:16:59.635 { 00:16:59.635 "name": "BaseBdev2", 00:16:59.635 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:16:59.635 "is_configured": true, 00:16:59.635 "data_offset": 2048, 00:16:59.635 "data_size": 63488 00:16:59.635 }, 00:16:59.635 { 00:16:59.635 "name": "BaseBdev3", 00:16:59.635 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:16:59.635 "is_configured": true, 00:16:59.635 "data_offset": 2048, 00:16:59.635 "data_size": 63488 00:16:59.635 }, 00:16:59.635 { 00:16:59.635 "name": "BaseBdev4", 00:16:59.635 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:16:59.635 "is_configured": true, 00:16:59.635 "data_offset": 2048, 00:16:59.635 "data_size": 63488 00:16:59.635 } 00:16:59.635 ] 00:16:59.635 }' 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.635 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.205 
11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.205 "name": "raid_bdev1", 00:17:00.205 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:00.205 "strip_size_kb": 64, 00:17:00.205 "state": "online", 00:17:00.205 "raid_level": "raid5f", 00:17:00.205 "superblock": true, 00:17:00.205 "num_base_bdevs": 4, 00:17:00.205 "num_base_bdevs_discovered": 3, 00:17:00.205 "num_base_bdevs_operational": 3, 00:17:00.205 "base_bdevs_list": [ 00:17:00.205 { 00:17:00.205 "name": null, 00:17:00.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.205 "is_configured": false, 00:17:00.205 "data_offset": 0, 00:17:00.205 "data_size": 63488 00:17:00.205 }, 00:17:00.205 { 00:17:00.205 "name": "BaseBdev2", 00:17:00.205 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:00.205 "is_configured": true, 00:17:00.205 "data_offset": 2048, 00:17:00.205 "data_size": 63488 00:17:00.205 }, 00:17:00.205 { 00:17:00.205 "name": "BaseBdev3", 00:17:00.205 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:00.205 "is_configured": true, 00:17:00.205 "data_offset": 2048, 00:17:00.205 
"data_size": 63488 00:17:00.205 }, 00:17:00.205 { 00:17:00.205 "name": "BaseBdev4", 00:17:00.205 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:00.205 "is_configured": true, 00:17:00.205 "data_offset": 2048, 00:17:00.205 "data_size": 63488 00:17:00.205 } 00:17:00.205 ] 00:17:00.205 }' 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.205 11:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.205 [2024-11-15 11:02:06.999013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:00.205 [2024-11-15 11:02:07.017338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:00.205 11:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.205 11:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:00.205 [2024-11-15 11:02:07.028018] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:01.146 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.146 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.146 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.146 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.146 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.146 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.146 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.146 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.146 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.146 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.406 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.406 "name": "raid_bdev1", 00:17:01.406 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:01.406 "strip_size_kb": 64, 00:17:01.406 "state": "online", 00:17:01.406 "raid_level": "raid5f", 00:17:01.406 "superblock": true, 00:17:01.406 "num_base_bdevs": 4, 00:17:01.406 "num_base_bdevs_discovered": 4, 00:17:01.406 "num_base_bdevs_operational": 4, 00:17:01.406 "process": { 00:17:01.406 "type": "rebuild", 00:17:01.406 "target": "spare", 00:17:01.406 "progress": { 00:17:01.406 "blocks": 17280, 00:17:01.406 "percent": 9 00:17:01.407 } 00:17:01.407 }, 00:17:01.407 "base_bdevs_list": [ 00:17:01.407 { 00:17:01.407 "name": "spare", 00:17:01.407 "uuid": "c0c50d01-8172-56b9-9ad0-c496e3608db1", 00:17:01.407 "is_configured": true, 00:17:01.407 "data_offset": 2048, 00:17:01.407 "data_size": 63488 00:17:01.407 }, 00:17:01.407 { 00:17:01.407 "name": "BaseBdev2", 00:17:01.407 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:01.407 "is_configured": true, 00:17:01.407 "data_offset": 2048, 00:17:01.407 "data_size": 63488 00:17:01.407 }, 00:17:01.407 { 
00:17:01.407 "name": "BaseBdev3", 00:17:01.407 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:01.407 "is_configured": true, 00:17:01.407 "data_offset": 2048, 00:17:01.407 "data_size": 63488 00:17:01.407 }, 00:17:01.407 { 00:17:01.407 "name": "BaseBdev4", 00:17:01.407 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:01.407 "is_configured": true, 00:17:01.407 "data_offset": 2048, 00:17:01.407 "data_size": 63488 00:17:01.407 } 00:17:01.407 ] 00:17:01.407 }' 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:01.407 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=647 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.407 "name": "raid_bdev1", 00:17:01.407 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:01.407 "strip_size_kb": 64, 00:17:01.407 "state": "online", 00:17:01.407 "raid_level": "raid5f", 00:17:01.407 "superblock": true, 00:17:01.407 "num_base_bdevs": 4, 00:17:01.407 "num_base_bdevs_discovered": 4, 00:17:01.407 "num_base_bdevs_operational": 4, 00:17:01.407 "process": { 00:17:01.407 "type": "rebuild", 00:17:01.407 "target": "spare", 00:17:01.407 "progress": { 00:17:01.407 "blocks": 21120, 00:17:01.407 "percent": 11 00:17:01.407 } 00:17:01.407 }, 00:17:01.407 "base_bdevs_list": [ 00:17:01.407 { 00:17:01.407 "name": "spare", 00:17:01.407 "uuid": "c0c50d01-8172-56b9-9ad0-c496e3608db1", 00:17:01.407 "is_configured": true, 00:17:01.407 "data_offset": 2048, 00:17:01.407 "data_size": 63488 00:17:01.407 }, 00:17:01.407 { 00:17:01.407 "name": "BaseBdev2", 00:17:01.407 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:01.407 "is_configured": true, 00:17:01.407 "data_offset": 2048, 00:17:01.407 "data_size": 63488 00:17:01.407 }, 00:17:01.407 { 
00:17:01.407 "name": "BaseBdev3", 00:17:01.407 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:01.407 "is_configured": true, 00:17:01.407 "data_offset": 2048, 00:17:01.407 "data_size": 63488 00:17:01.407 }, 00:17:01.407 { 00:17:01.407 "name": "BaseBdev4", 00:17:01.407 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:01.407 "is_configured": true, 00:17:01.407 "data_offset": 2048, 00:17:01.407 "data_size": 63488 00:17:01.407 } 00:17:01.407 ] 00:17:01.407 }' 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.407 11:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:02.789 11:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:02.789 11:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.789 11:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.789 11:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.789 11:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.789 11:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.789 11:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.790 11:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.790 11:02:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.790 11:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.790 11:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.790 11:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.790 "name": "raid_bdev1", 00:17:02.790 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:02.790 "strip_size_kb": 64, 00:17:02.790 "state": "online", 00:17:02.790 "raid_level": "raid5f", 00:17:02.790 "superblock": true, 00:17:02.790 "num_base_bdevs": 4, 00:17:02.790 "num_base_bdevs_discovered": 4, 00:17:02.790 "num_base_bdevs_operational": 4, 00:17:02.790 "process": { 00:17:02.790 "type": "rebuild", 00:17:02.790 "target": "spare", 00:17:02.790 "progress": { 00:17:02.790 "blocks": 44160, 00:17:02.790 "percent": 23 00:17:02.790 } 00:17:02.790 }, 00:17:02.790 "base_bdevs_list": [ 00:17:02.790 { 00:17:02.790 "name": "spare", 00:17:02.790 "uuid": "c0c50d01-8172-56b9-9ad0-c496e3608db1", 00:17:02.790 "is_configured": true, 00:17:02.790 "data_offset": 2048, 00:17:02.790 "data_size": 63488 00:17:02.790 }, 00:17:02.790 { 00:17:02.790 "name": "BaseBdev2", 00:17:02.790 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:02.790 "is_configured": true, 00:17:02.790 "data_offset": 2048, 00:17:02.790 "data_size": 63488 00:17:02.790 }, 00:17:02.790 { 00:17:02.790 "name": "BaseBdev3", 00:17:02.790 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:02.790 "is_configured": true, 00:17:02.790 "data_offset": 2048, 00:17:02.790 "data_size": 63488 00:17:02.790 }, 00:17:02.790 { 00:17:02.790 "name": "BaseBdev4", 00:17:02.790 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:02.790 "is_configured": true, 00:17:02.790 "data_offset": 2048, 00:17:02.790 "data_size": 63488 00:17:02.790 } 00:17:02.790 ] 00:17:02.790 }' 00:17:02.790 11:02:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.790 11:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.790 11:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.790 11:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.790 11:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.730 "name": "raid_bdev1", 00:17:03.730 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:03.730 "strip_size_kb": 64, 00:17:03.730 "state": 
"online", 00:17:03.730 "raid_level": "raid5f", 00:17:03.730 "superblock": true, 00:17:03.730 "num_base_bdevs": 4, 00:17:03.730 "num_base_bdevs_discovered": 4, 00:17:03.730 "num_base_bdevs_operational": 4, 00:17:03.730 "process": { 00:17:03.730 "type": "rebuild", 00:17:03.730 "target": "spare", 00:17:03.730 "progress": { 00:17:03.730 "blocks": 65280, 00:17:03.730 "percent": 34 00:17:03.730 } 00:17:03.730 }, 00:17:03.730 "base_bdevs_list": [ 00:17:03.730 { 00:17:03.730 "name": "spare", 00:17:03.730 "uuid": "c0c50d01-8172-56b9-9ad0-c496e3608db1", 00:17:03.730 "is_configured": true, 00:17:03.730 "data_offset": 2048, 00:17:03.730 "data_size": 63488 00:17:03.730 }, 00:17:03.730 { 00:17:03.730 "name": "BaseBdev2", 00:17:03.730 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:03.730 "is_configured": true, 00:17:03.730 "data_offset": 2048, 00:17:03.730 "data_size": 63488 00:17:03.730 }, 00:17:03.730 { 00:17:03.730 "name": "BaseBdev3", 00:17:03.730 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:03.730 "is_configured": true, 00:17:03.730 "data_offset": 2048, 00:17:03.730 "data_size": 63488 00:17:03.730 }, 00:17:03.730 { 00:17:03.730 "name": "BaseBdev4", 00:17:03.730 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:03.730 "is_configured": true, 00:17:03.730 "data_offset": 2048, 00:17:03.730 "data_size": 63488 00:17:03.730 } 00:17:03.730 ] 00:17:03.730 }' 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.730 11:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.111 "name": "raid_bdev1", 00:17:05.111 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:05.111 "strip_size_kb": 64, 00:17:05.111 "state": "online", 00:17:05.111 "raid_level": "raid5f", 00:17:05.111 "superblock": true, 00:17:05.111 "num_base_bdevs": 4, 00:17:05.111 "num_base_bdevs_discovered": 4, 00:17:05.111 "num_base_bdevs_operational": 4, 00:17:05.111 "process": { 00:17:05.111 "type": "rebuild", 00:17:05.111 "target": "spare", 00:17:05.111 "progress": { 00:17:05.111 "blocks": 86400, 00:17:05.111 "percent": 45 00:17:05.111 } 00:17:05.111 }, 00:17:05.111 "base_bdevs_list": [ 00:17:05.111 { 00:17:05.111 "name": "spare", 00:17:05.111 "uuid": "c0c50d01-8172-56b9-9ad0-c496e3608db1", 
00:17:05.111 "is_configured": true, 00:17:05.111 "data_offset": 2048, 00:17:05.111 "data_size": 63488 00:17:05.111 }, 00:17:05.111 { 00:17:05.111 "name": "BaseBdev2", 00:17:05.111 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:05.111 "is_configured": true, 00:17:05.111 "data_offset": 2048, 00:17:05.111 "data_size": 63488 00:17:05.111 }, 00:17:05.111 { 00:17:05.111 "name": "BaseBdev3", 00:17:05.111 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:05.111 "is_configured": true, 00:17:05.111 "data_offset": 2048, 00:17:05.111 "data_size": 63488 00:17:05.111 }, 00:17:05.111 { 00:17:05.111 "name": "BaseBdev4", 00:17:05.111 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:05.111 "is_configured": true, 00:17:05.111 "data_offset": 2048, 00:17:05.111 "data_size": 63488 00:17:05.111 } 00:17:05.111 ] 00:17:05.111 }' 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.111 11:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:06.050 11:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.050 11:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.050 11:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.050 11:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.050 11:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.050 11:02:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.050 11:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.050 11:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.050 11:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.050 11:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.050 11:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.050 11:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.050 "name": "raid_bdev1", 00:17:06.050 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:06.050 "strip_size_kb": 64, 00:17:06.050 "state": "online", 00:17:06.050 "raid_level": "raid5f", 00:17:06.050 "superblock": true, 00:17:06.050 "num_base_bdevs": 4, 00:17:06.050 "num_base_bdevs_discovered": 4, 00:17:06.050 "num_base_bdevs_operational": 4, 00:17:06.050 "process": { 00:17:06.050 "type": "rebuild", 00:17:06.050 "target": "spare", 00:17:06.050 "progress": { 00:17:06.050 "blocks": 109440, 00:17:06.050 "percent": 57 00:17:06.050 } 00:17:06.050 }, 00:17:06.050 "base_bdevs_list": [ 00:17:06.050 { 00:17:06.050 "name": "spare", 00:17:06.050 "uuid": "c0c50d01-8172-56b9-9ad0-c496e3608db1", 00:17:06.050 "is_configured": true, 00:17:06.050 "data_offset": 2048, 00:17:06.050 "data_size": 63488 00:17:06.050 }, 00:17:06.050 { 00:17:06.050 "name": "BaseBdev2", 00:17:06.050 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:06.050 "is_configured": true, 00:17:06.050 "data_offset": 2048, 00:17:06.050 "data_size": 63488 00:17:06.050 }, 00:17:06.050 { 00:17:06.050 "name": "BaseBdev3", 00:17:06.050 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:06.050 "is_configured": true, 00:17:06.050 "data_offset": 2048, 00:17:06.050 
"data_size": 63488 00:17:06.050 }, 00:17:06.050 { 00:17:06.050 "name": "BaseBdev4", 00:17:06.050 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:06.050 "is_configured": true, 00:17:06.050 "data_offset": 2048, 00:17:06.050 "data_size": 63488 00:17:06.050 } 00:17:06.050 ] 00:17:06.050 }' 00:17:06.050 11:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.051 11:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.051 11:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.051 11:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.051 11:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:07.431 11:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:07.431 11:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.431 11:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.431 11:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.431 11:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.431 11:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.431 11:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.431 11:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.431 11:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.431 11:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.431 
11:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.431 11:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.431 "name": "raid_bdev1", 00:17:07.431 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:07.431 "strip_size_kb": 64, 00:17:07.431 "state": "online", 00:17:07.431 "raid_level": "raid5f", 00:17:07.431 "superblock": true, 00:17:07.431 "num_base_bdevs": 4, 00:17:07.431 "num_base_bdevs_discovered": 4, 00:17:07.431 "num_base_bdevs_operational": 4, 00:17:07.431 "process": { 00:17:07.431 "type": "rebuild", 00:17:07.431 "target": "spare", 00:17:07.431 "progress": { 00:17:07.431 "blocks": 130560, 00:17:07.431 "percent": 68 00:17:07.431 } 00:17:07.431 }, 00:17:07.431 "base_bdevs_list": [ 00:17:07.431 { 00:17:07.431 "name": "spare", 00:17:07.431 "uuid": "c0c50d01-8172-56b9-9ad0-c496e3608db1", 00:17:07.431 "is_configured": true, 00:17:07.431 "data_offset": 2048, 00:17:07.431 "data_size": 63488 00:17:07.431 }, 00:17:07.431 { 00:17:07.431 "name": "BaseBdev2", 00:17:07.431 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:07.431 "is_configured": true, 00:17:07.431 "data_offset": 2048, 00:17:07.431 "data_size": 63488 00:17:07.431 }, 00:17:07.431 { 00:17:07.431 "name": "BaseBdev3", 00:17:07.431 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:07.431 "is_configured": true, 00:17:07.431 "data_offset": 2048, 00:17:07.431 "data_size": 63488 00:17:07.431 }, 00:17:07.431 { 00:17:07.431 "name": "BaseBdev4", 00:17:07.431 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:07.431 "is_configured": true, 00:17:07.431 "data_offset": 2048, 00:17:07.431 "data_size": 63488 00:17:07.431 } 00:17:07.431 ] 00:17:07.431 }' 00:17:07.431 11:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.431 11:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.431 11:02:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.431 11:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.431 11:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.375 "name": "raid_bdev1", 00:17:08.375 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:08.375 "strip_size_kb": 64, 00:17:08.375 "state": "online", 00:17:08.375 "raid_level": "raid5f", 00:17:08.375 "superblock": true, 00:17:08.375 "num_base_bdevs": 4, 00:17:08.375 "num_base_bdevs_discovered": 4, 00:17:08.375 "num_base_bdevs_operational": 
4, 00:17:08.375 "process": { 00:17:08.375 "type": "rebuild", 00:17:08.375 "target": "spare", 00:17:08.375 "progress": { 00:17:08.375 "blocks": 153600, 00:17:08.375 "percent": 80 00:17:08.375 } 00:17:08.375 }, 00:17:08.375 "base_bdevs_list": [ 00:17:08.375 { 00:17:08.375 "name": "spare", 00:17:08.375 "uuid": "c0c50d01-8172-56b9-9ad0-c496e3608db1", 00:17:08.375 "is_configured": true, 00:17:08.375 "data_offset": 2048, 00:17:08.375 "data_size": 63488 00:17:08.375 }, 00:17:08.375 { 00:17:08.375 "name": "BaseBdev2", 00:17:08.375 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:08.375 "is_configured": true, 00:17:08.375 "data_offset": 2048, 00:17:08.375 "data_size": 63488 00:17:08.375 }, 00:17:08.375 { 00:17:08.375 "name": "BaseBdev3", 00:17:08.375 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:08.375 "is_configured": true, 00:17:08.375 "data_offset": 2048, 00:17:08.375 "data_size": 63488 00:17:08.375 }, 00:17:08.375 { 00:17:08.375 "name": "BaseBdev4", 00:17:08.375 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:08.375 "is_configured": true, 00:17:08.375 "data_offset": 2048, 00:17:08.375 "data_size": 63488 00:17:08.375 } 00:17:08.375 ] 00:17:08.375 }' 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.375 11:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:09.755 11:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.755 11:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.755 
11:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.755 11:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.756 11:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.756 11:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.756 11:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.756 11:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.756 11:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.756 11:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.756 11:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.756 11:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.756 "name": "raid_bdev1", 00:17:09.756 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:09.756 "strip_size_kb": 64, 00:17:09.756 "state": "online", 00:17:09.756 "raid_level": "raid5f", 00:17:09.756 "superblock": true, 00:17:09.756 "num_base_bdevs": 4, 00:17:09.756 "num_base_bdevs_discovered": 4, 00:17:09.756 "num_base_bdevs_operational": 4, 00:17:09.756 "process": { 00:17:09.756 "type": "rebuild", 00:17:09.756 "target": "spare", 00:17:09.756 "progress": { 00:17:09.756 "blocks": 174720, 00:17:09.756 "percent": 91 00:17:09.756 } 00:17:09.756 }, 00:17:09.756 "base_bdevs_list": [ 00:17:09.756 { 00:17:09.756 "name": "spare", 00:17:09.756 "uuid": "c0c50d01-8172-56b9-9ad0-c496e3608db1", 00:17:09.756 "is_configured": true, 00:17:09.756 "data_offset": 2048, 00:17:09.756 "data_size": 63488 00:17:09.756 }, 00:17:09.756 { 00:17:09.756 "name": "BaseBdev2", 00:17:09.756 "uuid": 
"9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:09.756 "is_configured": true, 00:17:09.756 "data_offset": 2048, 00:17:09.756 "data_size": 63488 00:17:09.756 }, 00:17:09.756 { 00:17:09.756 "name": "BaseBdev3", 00:17:09.756 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:09.756 "is_configured": true, 00:17:09.756 "data_offset": 2048, 00:17:09.756 "data_size": 63488 00:17:09.756 }, 00:17:09.756 { 00:17:09.756 "name": "BaseBdev4", 00:17:09.756 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:09.756 "is_configured": true, 00:17:09.756 "data_offset": 2048, 00:17:09.756 "data_size": 63488 00:17:09.756 } 00:17:09.756 ] 00:17:09.756 }' 00:17:09.756 11:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.756 11:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.756 11:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.756 11:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.756 11:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.328 [2024-11-15 11:02:17.097891] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:10.328 [2024-11-15 11:02:17.097976] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:10.328 [2024-11-15 11:02:17.098138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.587 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.587 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.587 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.587 11:02:17 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.587 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.587 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.587 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.587 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.587 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.587 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.588 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.588 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.588 "name": "raid_bdev1", 00:17:10.588 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:10.588 "strip_size_kb": 64, 00:17:10.588 "state": "online", 00:17:10.588 "raid_level": "raid5f", 00:17:10.588 "superblock": true, 00:17:10.588 "num_base_bdevs": 4, 00:17:10.588 "num_base_bdevs_discovered": 4, 00:17:10.588 "num_base_bdevs_operational": 4, 00:17:10.588 "base_bdevs_list": [ 00:17:10.588 { 00:17:10.588 "name": "spare", 00:17:10.588 "uuid": "c0c50d01-8172-56b9-9ad0-c496e3608db1", 00:17:10.588 "is_configured": true, 00:17:10.588 "data_offset": 2048, 00:17:10.588 "data_size": 63488 00:17:10.588 }, 00:17:10.588 { 00:17:10.588 "name": "BaseBdev2", 00:17:10.588 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:10.588 "is_configured": true, 00:17:10.588 "data_offset": 2048, 00:17:10.588 "data_size": 63488 00:17:10.588 }, 00:17:10.588 { 00:17:10.588 "name": "BaseBdev3", 00:17:10.588 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:10.588 "is_configured": true, 00:17:10.588 "data_offset": 2048, 00:17:10.588 "data_size": 63488 00:17:10.588 }, 
00:17:10.588 { 00:17:10.588 "name": "BaseBdev4", 00:17:10.588 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:10.588 "is_configured": true, 00:17:10.588 "data_offset": 2048, 00:17:10.588 "data_size": 63488 00:17:10.588 } 00:17:10.588 ] 00:17:10.588 }' 00:17:10.588 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.848 "name": "raid_bdev1", 00:17:10.848 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:10.848 "strip_size_kb": 64, 00:17:10.848 "state": "online", 00:17:10.848 "raid_level": "raid5f", 00:17:10.848 "superblock": true, 00:17:10.848 "num_base_bdevs": 4, 00:17:10.848 "num_base_bdevs_discovered": 4, 00:17:10.848 "num_base_bdevs_operational": 4, 00:17:10.848 "base_bdevs_list": [ 00:17:10.848 { 00:17:10.848 "name": "spare", 00:17:10.848 "uuid": "c0c50d01-8172-56b9-9ad0-c496e3608db1", 00:17:10.848 "is_configured": true, 00:17:10.848 "data_offset": 2048, 00:17:10.848 "data_size": 63488 00:17:10.848 }, 00:17:10.848 { 00:17:10.848 "name": "BaseBdev2", 00:17:10.848 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:10.848 "is_configured": true, 00:17:10.848 "data_offset": 2048, 00:17:10.848 "data_size": 63488 00:17:10.848 }, 00:17:10.848 { 00:17:10.848 "name": "BaseBdev3", 00:17:10.848 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:10.848 "is_configured": true, 00:17:10.848 "data_offset": 2048, 00:17:10.848 "data_size": 63488 00:17:10.848 }, 00:17:10.848 { 00:17:10.848 "name": "BaseBdev4", 00:17:10.848 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:10.848 "is_configured": true, 00:17:10.848 "data_offset": 2048, 00:17:10.848 "data_size": 63488 00:17:10.848 } 00:17:10.848 ] 00:17:10.848 }' 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:10.848 11:02:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.848 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.112 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.112 "name": "raid_bdev1", 00:17:11.112 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:11.112 "strip_size_kb": 64, 00:17:11.112 "state": "online", 00:17:11.112 "raid_level": "raid5f", 00:17:11.112 "superblock": true, 00:17:11.112 "num_base_bdevs": 4, 00:17:11.112 "num_base_bdevs_discovered": 4, 00:17:11.112 "num_base_bdevs_operational": 4, 00:17:11.112 
"base_bdevs_list": [ 00:17:11.112 { 00:17:11.112 "name": "spare", 00:17:11.112 "uuid": "c0c50d01-8172-56b9-9ad0-c496e3608db1", 00:17:11.112 "is_configured": true, 00:17:11.112 "data_offset": 2048, 00:17:11.112 "data_size": 63488 00:17:11.112 }, 00:17:11.112 { 00:17:11.112 "name": "BaseBdev2", 00:17:11.112 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:11.112 "is_configured": true, 00:17:11.112 "data_offset": 2048, 00:17:11.112 "data_size": 63488 00:17:11.112 }, 00:17:11.112 { 00:17:11.112 "name": "BaseBdev3", 00:17:11.112 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:11.112 "is_configured": true, 00:17:11.112 "data_offset": 2048, 00:17:11.112 "data_size": 63488 00:17:11.112 }, 00:17:11.112 { 00:17:11.112 "name": "BaseBdev4", 00:17:11.112 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:11.112 "is_configured": true, 00:17:11.112 "data_offset": 2048, 00:17:11.112 "data_size": 63488 00:17:11.112 } 00:17:11.112 ] 00:17:11.112 }' 00:17:11.112 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.112 11:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.372 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:11.372 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.372 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.372 [2024-11-15 11:02:18.185797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:11.373 [2024-11-15 11:02:18.185831] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:11.373 [2024-11-15 11:02:18.185917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.373 [2024-11-15 11:02:18.186019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:17:11.373 [2024-11-15 11:02:18.186043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:11.373 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:11.633 /dev/nbd0 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:11.633 1+0 records in 00:17:11.633 1+0 records out 00:17:11.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504274 s, 8.1 MB/s 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:11.633 11:02:18 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:11.633 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:11.893 /dev/nbd1 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:17:11.893 1+0 records in 00:17:11.893 1+0 records out 00:17:11.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455724 s, 9.0 MB/s 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:11.893 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:12.153 11:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:12.153 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.153 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:12.153 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:12.153 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:12.153 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.153 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:12.413 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:17:12.413 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:12.413 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:12.413 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.413 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.413 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:12.413 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:12.413 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.413 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.413 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.672 [2024-11-15 11:02:19.500956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:12.672 [2024-11-15 11:02:19.501092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.672 [2024-11-15 11:02:19.501146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:12.672 [2024-11-15 11:02:19.501191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.672 [2024-11-15 11:02:19.503774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.672 [2024-11-15 11:02:19.503852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:12.672 [2024-11-15 11:02:19.503992] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:12.672 [2024-11-15 11:02:19.504094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:12.672 [2024-11-15 11:02:19.504319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:12.672 [2024-11-15 11:02:19.504490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:12.672 [2024-11-15 11:02:19.504631] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:12.672 spare 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.672 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.932 [2024-11-15 11:02:19.604605] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:12.932 [2024-11-15 11:02:19.604663] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:12.932 [2024-11-15 11:02:19.605007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:12.932 [2024-11-15 11:02:19.613464] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:12.932 [2024-11-15 11:02:19.613491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:12.932 [2024-11-15 11:02:19.613742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.932 "name": "raid_bdev1", 00:17:12.932 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:12.932 "strip_size_kb": 64, 00:17:12.932 "state": "online", 00:17:12.932 "raid_level": "raid5f", 00:17:12.932 "superblock": true, 00:17:12.932 "num_base_bdevs": 4, 00:17:12.932 "num_base_bdevs_discovered": 4, 00:17:12.932 "num_base_bdevs_operational": 4, 00:17:12.932 "base_bdevs_list": [ 00:17:12.932 { 00:17:12.932 "name": "spare", 00:17:12.932 "uuid": "c0c50d01-8172-56b9-9ad0-c496e3608db1", 00:17:12.932 "is_configured": true, 00:17:12.932 "data_offset": 2048, 00:17:12.932 "data_size": 63488 00:17:12.932 }, 00:17:12.932 { 00:17:12.932 "name": "BaseBdev2", 00:17:12.932 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:12.932 "is_configured": true, 00:17:12.932 "data_offset": 
2048, 00:17:12.932 "data_size": 63488 00:17:12.932 }, 00:17:12.932 { 00:17:12.932 "name": "BaseBdev3", 00:17:12.932 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:12.932 "is_configured": true, 00:17:12.932 "data_offset": 2048, 00:17:12.932 "data_size": 63488 00:17:12.932 }, 00:17:12.932 { 00:17:12.932 "name": "BaseBdev4", 00:17:12.932 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:12.932 "is_configured": true, 00:17:12.932 "data_offset": 2048, 00:17:12.932 "data_size": 63488 00:17:12.932 } 00:17:12.932 ] 00:17:12.932 }' 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.932 11:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.192 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.192 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.192 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.192 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.192 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.192 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.192 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.192 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.192 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.452 "name": 
"raid_bdev1", 00:17:13.452 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:13.452 "strip_size_kb": 64, 00:17:13.452 "state": "online", 00:17:13.452 "raid_level": "raid5f", 00:17:13.452 "superblock": true, 00:17:13.452 "num_base_bdevs": 4, 00:17:13.452 "num_base_bdevs_discovered": 4, 00:17:13.452 "num_base_bdevs_operational": 4, 00:17:13.452 "base_bdevs_list": [ 00:17:13.452 { 00:17:13.452 "name": "spare", 00:17:13.452 "uuid": "c0c50d01-8172-56b9-9ad0-c496e3608db1", 00:17:13.452 "is_configured": true, 00:17:13.452 "data_offset": 2048, 00:17:13.452 "data_size": 63488 00:17:13.452 }, 00:17:13.452 { 00:17:13.452 "name": "BaseBdev2", 00:17:13.452 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:13.452 "is_configured": true, 00:17:13.452 "data_offset": 2048, 00:17:13.452 "data_size": 63488 00:17:13.452 }, 00:17:13.452 { 00:17:13.452 "name": "BaseBdev3", 00:17:13.452 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:13.452 "is_configured": true, 00:17:13.452 "data_offset": 2048, 00:17:13.452 "data_size": 63488 00:17:13.452 }, 00:17:13.452 { 00:17:13.452 "name": "BaseBdev4", 00:17:13.452 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:13.452 "is_configured": true, 00:17:13.452 "data_offset": 2048, 00:17:13.452 "data_size": 63488 00:17:13.452 } 00:17:13.452 ] 00:17:13.452 }' 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.452 [2024-11-15 11:02:20.305426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.452 "name": "raid_bdev1", 00:17:13.452 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:13.452 "strip_size_kb": 64, 00:17:13.452 "state": "online", 00:17:13.452 "raid_level": "raid5f", 00:17:13.452 "superblock": true, 00:17:13.452 "num_base_bdevs": 4, 00:17:13.452 "num_base_bdevs_discovered": 3, 00:17:13.452 "num_base_bdevs_operational": 3, 00:17:13.452 "base_bdevs_list": [ 00:17:13.452 { 00:17:13.452 "name": null, 00:17:13.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.452 "is_configured": false, 00:17:13.452 "data_offset": 0, 00:17:13.452 "data_size": 63488 00:17:13.452 }, 00:17:13.452 { 00:17:13.452 "name": "BaseBdev2", 00:17:13.452 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:13.452 "is_configured": true, 00:17:13.452 "data_offset": 2048, 00:17:13.452 "data_size": 63488 00:17:13.452 }, 00:17:13.452 { 00:17:13.452 "name": "BaseBdev3", 00:17:13.452 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:13.452 "is_configured": true, 00:17:13.452 "data_offset": 2048, 00:17:13.452 "data_size": 63488 00:17:13.452 }, 00:17:13.452 { 00:17:13.452 "name": "BaseBdev4", 00:17:13.452 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:13.452 "is_configured": true, 00:17:13.452 "data_offset": 
2048, 00:17:13.452 "data_size": 63488 00:17:13.452 } 00:17:13.452 ] 00:17:13.452 }' 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.452 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.027 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:14.027 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.027 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.027 [2024-11-15 11:02:20.752678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.027 [2024-11-15 11:02:20.752961] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:14.027 [2024-11-15 11:02:20.753051] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:14.027 [2024-11-15 11:02:20.753146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.027 [2024-11-15 11:02:20.768025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:14.027 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.027 11:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:14.027 [2024-11-15 11:02:20.776657] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:14.965 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.965 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.965 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.965 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.966 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.966 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.966 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.966 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.966 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.966 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.966 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.966 "name": "raid_bdev1", 00:17:14.966 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:14.966 "strip_size_kb": 64, 00:17:14.966 "state": "online", 00:17:14.966 
"raid_level": "raid5f", 00:17:14.966 "superblock": true, 00:17:14.966 "num_base_bdevs": 4, 00:17:14.966 "num_base_bdevs_discovered": 4, 00:17:14.966 "num_base_bdevs_operational": 4, 00:17:14.966 "process": { 00:17:14.966 "type": "rebuild", 00:17:14.966 "target": "spare", 00:17:14.966 "progress": { 00:17:14.966 "blocks": 19200, 00:17:14.966 "percent": 10 00:17:14.966 } 00:17:14.966 }, 00:17:14.966 "base_bdevs_list": [ 00:17:14.966 { 00:17:14.966 "name": "spare", 00:17:14.966 "uuid": "c0c50d01-8172-56b9-9ad0-c496e3608db1", 00:17:14.966 "is_configured": true, 00:17:14.966 "data_offset": 2048, 00:17:14.966 "data_size": 63488 00:17:14.966 }, 00:17:14.966 { 00:17:14.966 "name": "BaseBdev2", 00:17:14.966 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:14.966 "is_configured": true, 00:17:14.966 "data_offset": 2048, 00:17:14.966 "data_size": 63488 00:17:14.966 }, 00:17:14.966 { 00:17:14.966 "name": "BaseBdev3", 00:17:14.966 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:14.966 "is_configured": true, 00:17:14.966 "data_offset": 2048, 00:17:14.966 "data_size": 63488 00:17:14.966 }, 00:17:14.966 { 00:17:14.966 "name": "BaseBdev4", 00:17:14.966 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:14.966 "is_configured": true, 00:17:14.966 "data_offset": 2048, 00:17:14.966 "data_size": 63488 00:17:14.966 } 00:17:14.966 ] 00:17:14.966 }' 00:17:14.966 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.966 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.966 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.227 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.227 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:15.227 11:02:21 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.227 11:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.227 [2024-11-15 11:02:21.929067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.227 [2024-11-15 11:02:21.985096] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:15.227 [2024-11-15 11:02:21.985197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.227 [2024-11-15 11:02:21.985217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.227 [2024-11-15 11:02:21.985226] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.227 "name": "raid_bdev1", 00:17:15.227 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:15.227 "strip_size_kb": 64, 00:17:15.227 "state": "online", 00:17:15.227 "raid_level": "raid5f", 00:17:15.227 "superblock": true, 00:17:15.227 "num_base_bdevs": 4, 00:17:15.227 "num_base_bdevs_discovered": 3, 00:17:15.227 "num_base_bdevs_operational": 3, 00:17:15.227 "base_bdevs_list": [ 00:17:15.227 { 00:17:15.227 "name": null, 00:17:15.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.227 "is_configured": false, 00:17:15.227 "data_offset": 0, 00:17:15.227 "data_size": 63488 00:17:15.227 }, 00:17:15.227 { 00:17:15.227 "name": "BaseBdev2", 00:17:15.227 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:15.227 "is_configured": true, 00:17:15.227 "data_offset": 2048, 00:17:15.227 "data_size": 63488 00:17:15.227 }, 00:17:15.227 { 00:17:15.227 "name": "BaseBdev3", 00:17:15.227 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:15.227 "is_configured": true, 00:17:15.227 "data_offset": 2048, 00:17:15.227 "data_size": 63488 00:17:15.227 }, 00:17:15.227 { 00:17:15.227 "name": "BaseBdev4", 00:17:15.227 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:15.227 "is_configured": true, 00:17:15.227 "data_offset": 2048, 00:17:15.227 "data_size": 63488 00:17:15.227 } 00:17:15.227 ] 00:17:15.227 
}' 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.227 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.796 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:15.796 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.796 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.796 [2024-11-15 11:02:22.487058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:15.796 [2024-11-15 11:02:22.487205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.796 [2024-11-15 11:02:22.487257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:15.796 [2024-11-15 11:02:22.487297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.796 [2024-11-15 11:02:22.487938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.796 [2024-11-15 11:02:22.488021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:15.796 [2024-11-15 11:02:22.488178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:15.796 [2024-11-15 11:02:22.488233] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:15.796 [2024-11-15 11:02:22.488285] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:15.796 [2024-11-15 11:02:22.488364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.796 [2024-11-15 11:02:22.504980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:15.796 spare 00:17:15.796 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.796 11:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:15.796 [2024-11-15 11:02:22.515266] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.735 "name": "raid_bdev1", 00:17:16.735 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:16.735 "strip_size_kb": 64, 00:17:16.735 "state": 
"online", 00:17:16.735 "raid_level": "raid5f", 00:17:16.735 "superblock": true, 00:17:16.735 "num_base_bdevs": 4, 00:17:16.735 "num_base_bdevs_discovered": 4, 00:17:16.735 "num_base_bdevs_operational": 4, 00:17:16.735 "process": { 00:17:16.735 "type": "rebuild", 00:17:16.735 "target": "spare", 00:17:16.735 "progress": { 00:17:16.735 "blocks": 19200, 00:17:16.735 "percent": 10 00:17:16.735 } 00:17:16.735 }, 00:17:16.735 "base_bdevs_list": [ 00:17:16.735 { 00:17:16.735 "name": "spare", 00:17:16.735 "uuid": "c0c50d01-8172-56b9-9ad0-c496e3608db1", 00:17:16.735 "is_configured": true, 00:17:16.735 "data_offset": 2048, 00:17:16.735 "data_size": 63488 00:17:16.735 }, 00:17:16.735 { 00:17:16.735 "name": "BaseBdev2", 00:17:16.735 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:16.735 "is_configured": true, 00:17:16.735 "data_offset": 2048, 00:17:16.735 "data_size": 63488 00:17:16.735 }, 00:17:16.735 { 00:17:16.735 "name": "BaseBdev3", 00:17:16.735 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:16.735 "is_configured": true, 00:17:16.735 "data_offset": 2048, 00:17:16.735 "data_size": 63488 00:17:16.735 }, 00:17:16.735 { 00:17:16.735 "name": "BaseBdev4", 00:17:16.735 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:16.735 "is_configured": true, 00:17:16.735 "data_offset": 2048, 00:17:16.735 "data_size": 63488 00:17:16.735 } 00:17:16.735 ] 00:17:16.735 }' 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:16.735 11:02:23 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.735 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.735 [2024-11-15 11:02:23.650018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.995 [2024-11-15 11:02:23.723373] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:16.995 [2024-11-15 11:02:23.723501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.995 [2024-11-15 11:02:23.723524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.995 [2024-11-15 11:02:23.723532] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.995 11:02:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.995 "name": "raid_bdev1", 00:17:16.995 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:16.995 "strip_size_kb": 64, 00:17:16.995 "state": "online", 00:17:16.995 "raid_level": "raid5f", 00:17:16.995 "superblock": true, 00:17:16.995 "num_base_bdevs": 4, 00:17:16.995 "num_base_bdevs_discovered": 3, 00:17:16.995 "num_base_bdevs_operational": 3, 00:17:16.995 "base_bdevs_list": [ 00:17:16.995 { 00:17:16.995 "name": null, 00:17:16.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.995 "is_configured": false, 00:17:16.995 "data_offset": 0, 00:17:16.995 "data_size": 63488 00:17:16.995 }, 00:17:16.995 { 00:17:16.995 "name": "BaseBdev2", 00:17:16.995 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:16.995 "is_configured": true, 00:17:16.995 "data_offset": 2048, 00:17:16.995 "data_size": 63488 00:17:16.995 }, 00:17:16.995 { 00:17:16.995 "name": "BaseBdev3", 00:17:16.995 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:16.995 "is_configured": true, 00:17:16.995 "data_offset": 2048, 00:17:16.995 "data_size": 63488 00:17:16.995 }, 00:17:16.995 { 00:17:16.995 "name": "BaseBdev4", 00:17:16.995 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:16.995 "is_configured": true, 00:17:16.995 "data_offset": 2048, 00:17:16.995 
"data_size": 63488 00:17:16.995 } 00:17:16.995 ] 00:17:16.995 }' 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.995 11:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.565 "name": "raid_bdev1", 00:17:17.565 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:17.565 "strip_size_kb": 64, 00:17:17.565 "state": "online", 00:17:17.565 "raid_level": "raid5f", 00:17:17.565 "superblock": true, 00:17:17.565 "num_base_bdevs": 4, 00:17:17.565 "num_base_bdevs_discovered": 3, 00:17:17.565 "num_base_bdevs_operational": 3, 00:17:17.565 "base_bdevs_list": [ 00:17:17.565 { 00:17:17.565 "name": null, 00:17:17.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.565 
"is_configured": false, 00:17:17.565 "data_offset": 0, 00:17:17.565 "data_size": 63488 00:17:17.565 }, 00:17:17.565 { 00:17:17.565 "name": "BaseBdev2", 00:17:17.565 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:17.565 "is_configured": true, 00:17:17.565 "data_offset": 2048, 00:17:17.565 "data_size": 63488 00:17:17.565 }, 00:17:17.565 { 00:17:17.565 "name": "BaseBdev3", 00:17:17.565 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:17.565 "is_configured": true, 00:17:17.565 "data_offset": 2048, 00:17:17.565 "data_size": 63488 00:17:17.565 }, 00:17:17.565 { 00:17:17.565 "name": "BaseBdev4", 00:17:17.565 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:17.565 "is_configured": true, 00:17:17.565 "data_offset": 2048, 00:17:17.565 "data_size": 63488 00:17:17.565 } 00:17:17.565 ] 00:17:17.565 }' 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.565 11:02:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.565 [2024-11-15 11:02:24.353834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:17.565 [2024-11-15 11:02:24.353952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.565 [2024-11-15 11:02:24.354011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:17.565 [2024-11-15 11:02:24.354052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.565 [2024-11-15 11:02:24.354631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.565 [2024-11-15 11:02:24.354700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:17.565 [2024-11-15 11:02:24.354827] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:17.565 [2024-11-15 11:02:24.354877] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:17.565 [2024-11-15 11:02:24.354929] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:17.565 [2024-11-15 11:02:24.354975] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:17.565 BaseBdev1 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.565 11:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:18.508 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:18.508 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.508 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:18.508 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.508 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.508 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:18.508 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.508 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.508 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.508 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.508 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.508 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.508 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.508 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.508 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.508 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.508 "name": "raid_bdev1", 00:17:18.508 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:18.508 "strip_size_kb": 64, 00:17:18.508 "state": "online", 00:17:18.508 "raid_level": "raid5f", 00:17:18.508 "superblock": true, 00:17:18.508 "num_base_bdevs": 4, 00:17:18.508 "num_base_bdevs_discovered": 3, 00:17:18.508 "num_base_bdevs_operational": 3, 00:17:18.508 "base_bdevs_list": [ 00:17:18.509 { 00:17:18.509 "name": null, 00:17:18.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.509 "is_configured": false, 00:17:18.509 
"data_offset": 0, 00:17:18.509 "data_size": 63488 00:17:18.509 }, 00:17:18.509 { 00:17:18.509 "name": "BaseBdev2", 00:17:18.509 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:18.509 "is_configured": true, 00:17:18.509 "data_offset": 2048, 00:17:18.509 "data_size": 63488 00:17:18.509 }, 00:17:18.509 { 00:17:18.509 "name": "BaseBdev3", 00:17:18.509 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:18.509 "is_configured": true, 00:17:18.509 "data_offset": 2048, 00:17:18.509 "data_size": 63488 00:17:18.509 }, 00:17:18.509 { 00:17:18.509 "name": "BaseBdev4", 00:17:18.509 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:18.509 "is_configured": true, 00:17:18.509 "data_offset": 2048, 00:17:18.509 "data_size": 63488 00:17:18.509 } 00:17:18.509 ] 00:17:18.509 }' 00:17:18.509 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.509 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.198 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.198 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.198 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.198 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.198 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.198 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.198 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.198 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.198 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:19.198 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.198 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.198 "name": "raid_bdev1", 00:17:19.198 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:19.198 "strip_size_kb": 64, 00:17:19.198 "state": "online", 00:17:19.198 "raid_level": "raid5f", 00:17:19.198 "superblock": true, 00:17:19.198 "num_base_bdevs": 4, 00:17:19.198 "num_base_bdevs_discovered": 3, 00:17:19.198 "num_base_bdevs_operational": 3, 00:17:19.198 "base_bdevs_list": [ 00:17:19.198 { 00:17:19.198 "name": null, 00:17:19.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.198 "is_configured": false, 00:17:19.198 "data_offset": 0, 00:17:19.198 "data_size": 63488 00:17:19.198 }, 00:17:19.198 { 00:17:19.198 "name": "BaseBdev2", 00:17:19.198 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:19.198 "is_configured": true, 00:17:19.198 "data_offset": 2048, 00:17:19.198 "data_size": 63488 00:17:19.198 }, 00:17:19.198 { 00:17:19.198 "name": "BaseBdev3", 00:17:19.198 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:19.199 "is_configured": true, 00:17:19.199 "data_offset": 2048, 00:17:19.199 "data_size": 63488 00:17:19.199 }, 00:17:19.199 { 00:17:19.199 "name": "BaseBdev4", 00:17:19.199 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:19.199 "is_configured": true, 00:17:19.199 "data_offset": 2048, 00:17:19.199 "data_size": 63488 00:17:19.199 } 00:17:19.199 ] 00:17:19.199 }' 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.199 
11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.199 [2024-11-15 11:02:25.923385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.199 [2024-11-15 11:02:25.923562] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:19.199 [2024-11-15 11:02:25.923581] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:19.199 request: 00:17:19.199 { 00:17:19.199 "base_bdev": "BaseBdev1", 00:17:19.199 "raid_bdev": "raid_bdev1", 00:17:19.199 "method": "bdev_raid_add_base_bdev", 00:17:19.199 "req_id": 1 00:17:19.199 } 00:17:19.199 Got JSON-RPC error response 00:17:19.199 response: 00:17:19.199 { 00:17:19.199 "code": -22, 00:17:19.199 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:17:19.199 } 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:19.199 11:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.137 "name": "raid_bdev1", 00:17:20.137 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:20.137 "strip_size_kb": 64, 00:17:20.137 "state": "online", 00:17:20.137 "raid_level": "raid5f", 00:17:20.137 "superblock": true, 00:17:20.137 "num_base_bdevs": 4, 00:17:20.137 "num_base_bdevs_discovered": 3, 00:17:20.137 "num_base_bdevs_operational": 3, 00:17:20.137 "base_bdevs_list": [ 00:17:20.137 { 00:17:20.137 "name": null, 00:17:20.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.137 "is_configured": false, 00:17:20.137 "data_offset": 0, 00:17:20.137 "data_size": 63488 00:17:20.137 }, 00:17:20.137 { 00:17:20.137 "name": "BaseBdev2", 00:17:20.137 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:20.137 "is_configured": true, 00:17:20.137 "data_offset": 2048, 00:17:20.137 "data_size": 63488 00:17:20.137 }, 00:17:20.137 { 00:17:20.137 "name": "BaseBdev3", 00:17:20.137 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:20.137 "is_configured": true, 00:17:20.137 "data_offset": 2048, 00:17:20.137 "data_size": 63488 00:17:20.137 }, 00:17:20.137 { 00:17:20.137 "name": "BaseBdev4", 00:17:20.137 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:20.137 "is_configured": true, 00:17:20.137 "data_offset": 2048, 00:17:20.137 "data_size": 63488 00:17:20.137 } 00:17:20.137 ] 00:17:20.137 }' 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.137 11:02:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:20.706 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.706 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.706 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.706 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.706 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.706 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.706 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.706 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.706 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.706 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.706 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.706 "name": "raid_bdev1", 00:17:20.706 "uuid": "91bedc82-88bd-4651-a81b-e953cf8615e4", 00:17:20.706 "strip_size_kb": 64, 00:17:20.706 "state": "online", 00:17:20.706 "raid_level": "raid5f", 00:17:20.706 "superblock": true, 00:17:20.706 "num_base_bdevs": 4, 00:17:20.706 "num_base_bdevs_discovered": 3, 00:17:20.706 "num_base_bdevs_operational": 3, 00:17:20.706 "base_bdevs_list": [ 00:17:20.706 { 00:17:20.706 "name": null, 00:17:20.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.706 "is_configured": false, 00:17:20.706 "data_offset": 0, 00:17:20.706 "data_size": 63488 00:17:20.706 }, 00:17:20.706 { 00:17:20.706 "name": "BaseBdev2", 00:17:20.706 "uuid": "9e8c19dc-fe08-59bc-8eb2-33d8a90b7ab3", 00:17:20.706 "is_configured": true, 
00:17:20.706 "data_offset": 2048, 00:17:20.706 "data_size": 63488 00:17:20.706 }, 00:17:20.706 { 00:17:20.706 "name": "BaseBdev3", 00:17:20.706 "uuid": "7bf7c6fb-b7f6-5845-b499-ad5b706439c1", 00:17:20.706 "is_configured": true, 00:17:20.706 "data_offset": 2048, 00:17:20.706 "data_size": 63488 00:17:20.706 }, 00:17:20.706 { 00:17:20.707 "name": "BaseBdev4", 00:17:20.707 "uuid": "af4c7f78-51f4-5c4d-9579-61dd73074358", 00:17:20.707 "is_configured": true, 00:17:20.707 "data_offset": 2048, 00:17:20.707 "data_size": 63488 00:17:20.707 } 00:17:20.707 ] 00:17:20.707 }' 00:17:20.707 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.707 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:20.707 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.707 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:20.707 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85293 00:17:20.707 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 85293 ']' 00:17:20.707 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 85293 00:17:20.707 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:20.707 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:20.707 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85293 00:17:20.707 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:20.707 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:20.707 killing process with pid 85293 00:17:20.707 Received shutdown signal, test 
time was about 60.000000 seconds 00:17:20.707 00:17:20.707 Latency(us) 00:17:20.707 [2024-11-15T11:02:27.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.707 [2024-11-15T11:02:27.635Z] =================================================================================================================== 00:17:20.707 [2024-11-15T11:02:27.635Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:20.707 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85293' 00:17:20.707 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 85293 00:17:20.707 [2024-11-15 11:02:27.618652] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:20.707 [2024-11-15 11:02:27.618791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.707 11:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 85293 00:17:20.707 [2024-11-15 11:02:27.618880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.707 [2024-11-15 11:02:27.618894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:21.274 [2024-11-15 11:02:28.122360] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.654 11:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:22.654 00:17:22.654 real 0m27.496s 00:17:22.654 user 0m34.770s 00:17:22.654 sys 0m3.044s 00:17:22.654 11:02:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:22.654 ************************************ 00:17:22.654 END TEST raid5f_rebuild_test_sb 00:17:22.654 ************************************ 00:17:22.654 11:02:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.654 11:02:29 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:22.654 11:02:29 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:22.654 11:02:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:22.654 11:02:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:22.654 11:02:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.654 ************************************ 00:17:22.654 START TEST raid_state_function_test_sb_4k 00:17:22.654 ************************************ 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:22.654 11:02:29 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:22.654 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:22.655 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86113 00:17:22.655 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:22.655 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86113' 00:17:22.655 Process raid pid: 86113 00:17:22.655 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86113 00:17:22.655 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86113 ']' 00:17:22.655 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:17:22.655 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.655 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:22.655 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.655 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:22.655 11:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.655 [2024-11-15 11:02:29.385049] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:17:22.655 [2024-11-15 11:02:29.385391] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.655 [2024-11-15 11:02:29.545613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.914 [2024-11-15 11:02:29.659532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.174 [2024-11-15 11:02:29.864184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.174 [2024-11-15 11:02:29.864226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:23.433 11:02:30 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.433 [2024-11-15 11:02:30.248313] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.433 [2024-11-15 11:02:30.248368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.433 [2024-11-15 11:02:30.248379] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.433 [2024-11-15 11:02:30.248396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.433 11:02:30 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.433 "name": "Existed_Raid", 00:17:23.433 "uuid": "9bd04b85-6054-4d5e-bd8a-1c59e39d0908", 00:17:23.433 "strip_size_kb": 0, 00:17:23.433 "state": "configuring", 00:17:23.433 "raid_level": "raid1", 00:17:23.433 "superblock": true, 00:17:23.433 "num_base_bdevs": 2, 00:17:23.433 "num_base_bdevs_discovered": 0, 00:17:23.433 "num_base_bdevs_operational": 2, 00:17:23.433 "base_bdevs_list": [ 00:17:23.433 { 00:17:23.433 "name": "BaseBdev1", 00:17:23.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.433 "is_configured": false, 00:17:23.433 "data_offset": 0, 00:17:23.433 "data_size": 0 00:17:23.433 }, 00:17:23.433 { 00:17:23.433 "name": "BaseBdev2", 00:17:23.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.433 "is_configured": false, 00:17:23.433 "data_offset": 0, 00:17:23.433 "data_size": 0 00:17:23.433 } 00:17:23.433 ] 00:17:23.433 }' 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.433 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.003 [2024-11-15 11:02:30.699482] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.003 [2024-11-15 11:02:30.699578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.003 [2024-11-15 11:02:30.711438] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.003 [2024-11-15 11:02:30.711531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.003 [2024-11-15 11:02:30.711575] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.003 [2024-11-15 11:02:30.711599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:24.003 [2024-11-15 11:02:30.761828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.003 BaseBdev1 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.003 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.003 [ 00:17:24.003 { 00:17:24.003 "name": "BaseBdev1", 00:17:24.003 "aliases": [ 00:17:24.003 "ad1fb803-ed84-41d6-afb4-e45266da7254" 00:17:24.003 
], 00:17:24.003 "product_name": "Malloc disk", 00:17:24.003 "block_size": 4096, 00:17:24.003 "num_blocks": 8192, 00:17:24.003 "uuid": "ad1fb803-ed84-41d6-afb4-e45266da7254", 00:17:24.003 "assigned_rate_limits": { 00:17:24.003 "rw_ios_per_sec": 0, 00:17:24.003 "rw_mbytes_per_sec": 0, 00:17:24.003 "r_mbytes_per_sec": 0, 00:17:24.003 "w_mbytes_per_sec": 0 00:17:24.003 }, 00:17:24.003 "claimed": true, 00:17:24.003 "claim_type": "exclusive_write", 00:17:24.003 "zoned": false, 00:17:24.003 "supported_io_types": { 00:17:24.003 "read": true, 00:17:24.003 "write": true, 00:17:24.003 "unmap": true, 00:17:24.003 "flush": true, 00:17:24.003 "reset": true, 00:17:24.003 "nvme_admin": false, 00:17:24.003 "nvme_io": false, 00:17:24.003 "nvme_io_md": false, 00:17:24.003 "write_zeroes": true, 00:17:24.004 "zcopy": true, 00:17:24.004 "get_zone_info": false, 00:17:24.004 "zone_management": false, 00:17:24.004 "zone_append": false, 00:17:24.004 "compare": false, 00:17:24.004 "compare_and_write": false, 00:17:24.004 "abort": true, 00:17:24.004 "seek_hole": false, 00:17:24.004 "seek_data": false, 00:17:24.004 "copy": true, 00:17:24.004 "nvme_iov_md": false 00:17:24.004 }, 00:17:24.004 "memory_domains": [ 00:17:24.004 { 00:17:24.004 "dma_device_id": "system", 00:17:24.004 "dma_device_type": 1 00:17:24.004 }, 00:17:24.004 { 00:17:24.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.004 "dma_device_type": 2 00:17:24.004 } 00:17:24.004 ], 00:17:24.004 "driver_specific": {} 00:17:24.004 } 00:17:24.004 ] 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.004 "name": "Existed_Raid", 00:17:24.004 "uuid": "be863037-c3b1-4a99-8aad-6159addd20c4", 00:17:24.004 "strip_size_kb": 0, 00:17:24.004 "state": "configuring", 00:17:24.004 "raid_level": "raid1", 00:17:24.004 "superblock": true, 00:17:24.004 "num_base_bdevs": 2, 00:17:24.004 "num_base_bdevs_discovered": 1, 
00:17:24.004 "num_base_bdevs_operational": 2, 00:17:24.004 "base_bdevs_list": [ 00:17:24.004 { 00:17:24.004 "name": "BaseBdev1", 00:17:24.004 "uuid": "ad1fb803-ed84-41d6-afb4-e45266da7254", 00:17:24.004 "is_configured": true, 00:17:24.004 "data_offset": 256, 00:17:24.004 "data_size": 7936 00:17:24.004 }, 00:17:24.004 { 00:17:24.004 "name": "BaseBdev2", 00:17:24.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.004 "is_configured": false, 00:17:24.004 "data_offset": 0, 00:17:24.004 "data_size": 0 00:17:24.004 } 00:17:24.004 ] 00:17:24.004 }' 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.004 11:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.574 [2024-11-15 11:02:31.245118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.574 [2024-11-15 11:02:31.245175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.574 [2024-11-15 11:02:31.257124] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.574 [2024-11-15 11:02:31.259161] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.574 [2024-11-15 11:02:31.259257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.574 "name": "Existed_Raid", 00:17:24.574 "uuid": "c86f7545-6e83-4270-8f3d-112805728212", 00:17:24.574 "strip_size_kb": 0, 00:17:24.574 "state": "configuring", 00:17:24.574 "raid_level": "raid1", 00:17:24.574 "superblock": true, 00:17:24.574 "num_base_bdevs": 2, 00:17:24.574 "num_base_bdevs_discovered": 1, 00:17:24.574 "num_base_bdevs_operational": 2, 00:17:24.574 "base_bdevs_list": [ 00:17:24.574 { 00:17:24.574 "name": "BaseBdev1", 00:17:24.574 "uuid": "ad1fb803-ed84-41d6-afb4-e45266da7254", 00:17:24.574 "is_configured": true, 00:17:24.574 "data_offset": 256, 00:17:24.574 "data_size": 7936 00:17:24.574 }, 00:17:24.574 { 00:17:24.574 "name": "BaseBdev2", 00:17:24.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.574 "is_configured": false, 00:17:24.574 "data_offset": 0, 00:17:24.574 "data_size": 0 00:17:24.574 } 00:17:24.574 ] 00:17:24.574 }' 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.574 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.834 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:24.834 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.835 11:02:31 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.835 [2024-11-15 11:02:31.709569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.835 [2024-11-15 11:02:31.709930] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:24.835 [2024-11-15 11:02:31.709983] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:24.835 [2024-11-15 11:02:31.710267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:24.835 BaseBdev2 00:17:24.835 [2024-11-15 11:02:31.710506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:24.835 [2024-11-15 11:02:31.710527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:24.835 [2024-11-15 11:02:31.710689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:24.835 11:02:31 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.835 [ 00:17:24.835 { 00:17:24.835 "name": "BaseBdev2", 00:17:24.835 "aliases": [ 00:17:24.835 "2e833ef7-10f5-4d6d-860e-96f4960e6da6" 00:17:24.835 ], 00:17:24.835 "product_name": "Malloc disk", 00:17:24.835 "block_size": 4096, 00:17:24.835 "num_blocks": 8192, 00:17:24.835 "uuid": "2e833ef7-10f5-4d6d-860e-96f4960e6da6", 00:17:24.835 "assigned_rate_limits": { 00:17:24.835 "rw_ios_per_sec": 0, 00:17:24.835 "rw_mbytes_per_sec": 0, 00:17:24.835 "r_mbytes_per_sec": 0, 00:17:24.835 "w_mbytes_per_sec": 0 00:17:24.835 }, 00:17:24.835 "claimed": true, 00:17:24.835 "claim_type": "exclusive_write", 00:17:24.835 "zoned": false, 00:17:24.835 "supported_io_types": { 00:17:24.835 "read": true, 00:17:24.835 "write": true, 00:17:24.835 "unmap": true, 00:17:24.835 "flush": true, 00:17:24.835 "reset": true, 00:17:24.835 "nvme_admin": false, 00:17:24.835 "nvme_io": false, 00:17:24.835 "nvme_io_md": false, 00:17:24.835 "write_zeroes": true, 00:17:24.835 "zcopy": true, 00:17:24.835 "get_zone_info": false, 00:17:24.835 "zone_management": false, 00:17:24.835 "zone_append": false, 00:17:24.835 "compare": false, 00:17:24.835 "compare_and_write": false, 00:17:24.835 "abort": true, 00:17:24.835 "seek_hole": false, 00:17:24.835 "seek_data": false, 00:17:24.835 "copy": true, 00:17:24.835 "nvme_iov_md": false 
00:17:24.835 }, 00:17:24.835 "memory_domains": [ 00:17:24.835 { 00:17:24.835 "dma_device_id": "system", 00:17:24.835 "dma_device_type": 1 00:17:24.835 }, 00:17:24.835 { 00:17:24.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.835 "dma_device_type": 2 00:17:24.835 } 00:17:24.835 ], 00:17:24.835 "driver_specific": {} 00:17:24.835 } 00:17:24.835 ] 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.835 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.094 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.094 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.094 "name": "Existed_Raid", 00:17:25.094 "uuid": "c86f7545-6e83-4270-8f3d-112805728212", 00:17:25.094 "strip_size_kb": 0, 00:17:25.094 "state": "online", 00:17:25.094 "raid_level": "raid1", 00:17:25.094 "superblock": true, 00:17:25.094 "num_base_bdevs": 2, 00:17:25.094 "num_base_bdevs_discovered": 2, 00:17:25.094 "num_base_bdevs_operational": 2, 00:17:25.094 "base_bdevs_list": [ 00:17:25.094 { 00:17:25.094 "name": "BaseBdev1", 00:17:25.094 "uuid": "ad1fb803-ed84-41d6-afb4-e45266da7254", 00:17:25.094 "is_configured": true, 00:17:25.094 "data_offset": 256, 00:17:25.094 "data_size": 7936 00:17:25.094 }, 00:17:25.094 { 00:17:25.094 "name": "BaseBdev2", 00:17:25.094 "uuid": "2e833ef7-10f5-4d6d-860e-96f4960e6da6", 00:17:25.094 "is_configured": true, 00:17:25.094 "data_offset": 256, 00:17:25.094 "data_size": 7936 00:17:25.095 } 00:17:25.095 ] 00:17:25.095 }' 00:17:25.095 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.095 11:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.354 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:25.354 11:02:32 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:25.354 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:25.354 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:25.354 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:25.354 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:25.354 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:25.354 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:25.354 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.354 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.354 [2024-11-15 11:02:32.237046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.354 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.354 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:25.354 "name": "Existed_Raid", 00:17:25.354 "aliases": [ 00:17:25.354 "c86f7545-6e83-4270-8f3d-112805728212" 00:17:25.354 ], 00:17:25.354 "product_name": "Raid Volume", 00:17:25.354 "block_size": 4096, 00:17:25.354 "num_blocks": 7936, 00:17:25.354 "uuid": "c86f7545-6e83-4270-8f3d-112805728212", 00:17:25.354 "assigned_rate_limits": { 00:17:25.354 "rw_ios_per_sec": 0, 00:17:25.354 "rw_mbytes_per_sec": 0, 00:17:25.354 "r_mbytes_per_sec": 0, 00:17:25.354 "w_mbytes_per_sec": 0 00:17:25.354 }, 00:17:25.354 "claimed": false, 00:17:25.354 "zoned": false, 00:17:25.354 "supported_io_types": { 00:17:25.354 "read": true, 
00:17:25.354 "write": true, 00:17:25.354 "unmap": false, 00:17:25.354 "flush": false, 00:17:25.354 "reset": true, 00:17:25.354 "nvme_admin": false, 00:17:25.354 "nvme_io": false, 00:17:25.354 "nvme_io_md": false, 00:17:25.354 "write_zeroes": true, 00:17:25.354 "zcopy": false, 00:17:25.354 "get_zone_info": false, 00:17:25.354 "zone_management": false, 00:17:25.354 "zone_append": false, 00:17:25.354 "compare": false, 00:17:25.354 "compare_and_write": false, 00:17:25.354 "abort": false, 00:17:25.354 "seek_hole": false, 00:17:25.354 "seek_data": false, 00:17:25.354 "copy": false, 00:17:25.354 "nvme_iov_md": false 00:17:25.354 }, 00:17:25.354 "memory_domains": [ 00:17:25.354 { 00:17:25.354 "dma_device_id": "system", 00:17:25.354 "dma_device_type": 1 00:17:25.354 }, 00:17:25.354 { 00:17:25.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.354 "dma_device_type": 2 00:17:25.354 }, 00:17:25.354 { 00:17:25.354 "dma_device_id": "system", 00:17:25.354 "dma_device_type": 1 00:17:25.354 }, 00:17:25.354 { 00:17:25.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.354 "dma_device_type": 2 00:17:25.354 } 00:17:25.354 ], 00:17:25.354 "driver_specific": { 00:17:25.354 "raid": { 00:17:25.354 "uuid": "c86f7545-6e83-4270-8f3d-112805728212", 00:17:25.354 "strip_size_kb": 0, 00:17:25.354 "state": "online", 00:17:25.354 "raid_level": "raid1", 00:17:25.354 "superblock": true, 00:17:25.354 "num_base_bdevs": 2, 00:17:25.354 "num_base_bdevs_discovered": 2, 00:17:25.354 "num_base_bdevs_operational": 2, 00:17:25.354 "base_bdevs_list": [ 00:17:25.354 { 00:17:25.354 "name": "BaseBdev1", 00:17:25.354 "uuid": "ad1fb803-ed84-41d6-afb4-e45266da7254", 00:17:25.354 "is_configured": true, 00:17:25.354 "data_offset": 256, 00:17:25.354 "data_size": 7936 00:17:25.354 }, 00:17:25.354 { 00:17:25.354 "name": "BaseBdev2", 00:17:25.354 "uuid": "2e833ef7-10f5-4d6d-860e-96f4960e6da6", 00:17:25.354 "is_configured": true, 00:17:25.354 "data_offset": 256, 00:17:25.354 "data_size": 7936 00:17:25.354 } 
00:17:25.354 ] 00:17:25.354 } 00:17:25.354 } 00:17:25.354 }' 00:17:25.354 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:25.614 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:25.614 BaseBdev2' 00:17:25.614 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:25.614 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:25.614 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:25.614 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:25.614 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.615 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.615 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:25.615 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.615 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:25.615 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:25.615 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:25.615 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:25.615 11:02:32 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:25.615 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.615 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.615 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.615 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:25.615 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:25.615 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:25.615 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.615 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.615 [2024-11-15 11:02:32.476471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:25.874 11:02:32 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.874 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.874 "name": "Existed_Raid", 00:17:25.874 "uuid": "c86f7545-6e83-4270-8f3d-112805728212", 00:17:25.874 "strip_size_kb": 0, 00:17:25.874 "state": "online", 00:17:25.874 "raid_level": "raid1", 00:17:25.874 "superblock": true, 00:17:25.874 
"num_base_bdevs": 2, 00:17:25.874 "num_base_bdevs_discovered": 1, 00:17:25.874 "num_base_bdevs_operational": 1, 00:17:25.874 "base_bdevs_list": [ 00:17:25.874 { 00:17:25.874 "name": null, 00:17:25.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.874 "is_configured": false, 00:17:25.874 "data_offset": 0, 00:17:25.874 "data_size": 7936 00:17:25.874 }, 00:17:25.874 { 00:17:25.874 "name": "BaseBdev2", 00:17:25.874 "uuid": "2e833ef7-10f5-4d6d-860e-96f4960e6da6", 00:17:25.874 "is_configured": true, 00:17:25.874 "data_offset": 256, 00:17:25.874 "data_size": 7936 00:17:25.874 } 00:17:25.875 ] 00:17:25.875 }' 00:17:25.875 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.875 11:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.134 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:26.134 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:26.134 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.134 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.134 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.134 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:26.134 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.396 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:26.396 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:26.396 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:17:26.396 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.396 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.396 [2024-11-15 11:02:33.081784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:26.396 [2024-11-15 11:02:33.081898] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:26.396 [2024-11-15 11:02:33.179549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.396 [2024-11-15 11:02:33.179688] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:26.396 [2024-11-15 11:02:33.179730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:26.397 11:02:33 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86113 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86113 ']' 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86113 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86113 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86113' 00:17:26.397 killing process with pid 86113 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86113 00:17:26.397 [2024-11-15 11:02:33.279673] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:26.397 11:02:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86113 00:17:26.397 [2024-11-15 11:02:33.297108] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:27.774 11:02:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:27.774 00:17:27.774 real 0m5.094s 00:17:27.774 user 0m7.372s 00:17:27.774 sys 0m0.850s 00:17:27.774 11:02:34 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:27.774 11:02:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.774 ************************************ 00:17:27.774 END TEST raid_state_function_test_sb_4k 00:17:27.774 ************************************ 00:17:27.774 11:02:34 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:27.774 11:02:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:27.774 11:02:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:27.774 11:02:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.774 ************************************ 00:17:27.774 START TEST raid_superblock_test_4k 00:17:27.774 ************************************ 00:17:27.774 11:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:17:27.774 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:27.774 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:27.774 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:27.774 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:27.775 
11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86365 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86365 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 86365 ']' 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:27.775 11:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.775 [2024-11-15 11:02:34.549044] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:17:27.775 [2024-11-15 11:02:34.549245] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86365 ] 00:17:28.032 [2024-11-15 11:02:34.736704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.032 [2024-11-15 11:02:34.858037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.291 [2024-11-15 11:02:35.062159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.291 [2024-11-15 11:02:35.062279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.551 malloc1 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.551 [2024-11-15 11:02:35.457579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:28.551 [2024-11-15 11:02:35.457692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.551 [2024-11-15 11:02:35.457736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:28.551 [2024-11-15 11:02:35.457766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.551 [2024-11-15 11:02:35.459868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.551 [2024-11-15 11:02:35.459937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:28.551 pt1 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.551 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.812 malloc2 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.812 [2024-11-15 11:02:35.516550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:28.812 [2024-11-15 11:02:35.516648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.812 [2024-11-15 11:02:35.516685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:28.812 [2024-11-15 11:02:35.516714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.812 [2024-11-15 11:02:35.518825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.812 [2024-11-15 
11:02:35.518891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:28.812 pt2 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.812 [2024-11-15 11:02:35.528588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:28.812 [2024-11-15 11:02:35.530386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:28.812 [2024-11-15 11:02:35.530557] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:28.812 [2024-11-15 11:02:35.530574] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:28.812 [2024-11-15 11:02:35.530805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:28.812 [2024-11-15 11:02:35.530960] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:28.812 [2024-11-15 11:02:35.530974] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:28.812 [2024-11-15 11:02:35.531132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.812 "name": "raid_bdev1", 00:17:28.812 "uuid": "62a0a911-5294-4c4c-b2a4-ca40eeb44df6", 00:17:28.812 "strip_size_kb": 0, 00:17:28.812 "state": "online", 00:17:28.812 "raid_level": "raid1", 00:17:28.812 "superblock": true, 00:17:28.812 "num_base_bdevs": 2, 00:17:28.812 
"num_base_bdevs_discovered": 2, 00:17:28.812 "num_base_bdevs_operational": 2, 00:17:28.812 "base_bdevs_list": [ 00:17:28.812 { 00:17:28.812 "name": "pt1", 00:17:28.812 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:28.812 "is_configured": true, 00:17:28.812 "data_offset": 256, 00:17:28.812 "data_size": 7936 00:17:28.812 }, 00:17:28.812 { 00:17:28.812 "name": "pt2", 00:17:28.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:28.812 "is_configured": true, 00:17:28.812 "data_offset": 256, 00:17:28.812 "data_size": 7936 00:17:28.812 } 00:17:28.812 ] 00:17:28.812 }' 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.812 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.072 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:29.072 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:29.072 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:29.072 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:29.072 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:29.072 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:29.072 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:29.072 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:29.072 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.072 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.072 [2024-11-15 11:02:35.920267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:29.072 11:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.072 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:29.072 "name": "raid_bdev1", 00:17:29.072 "aliases": [ 00:17:29.072 "62a0a911-5294-4c4c-b2a4-ca40eeb44df6" 00:17:29.072 ], 00:17:29.072 "product_name": "Raid Volume", 00:17:29.072 "block_size": 4096, 00:17:29.072 "num_blocks": 7936, 00:17:29.072 "uuid": "62a0a911-5294-4c4c-b2a4-ca40eeb44df6", 00:17:29.072 "assigned_rate_limits": { 00:17:29.072 "rw_ios_per_sec": 0, 00:17:29.072 "rw_mbytes_per_sec": 0, 00:17:29.072 "r_mbytes_per_sec": 0, 00:17:29.072 "w_mbytes_per_sec": 0 00:17:29.072 }, 00:17:29.072 "claimed": false, 00:17:29.072 "zoned": false, 00:17:29.072 "supported_io_types": { 00:17:29.072 "read": true, 00:17:29.072 "write": true, 00:17:29.072 "unmap": false, 00:17:29.072 "flush": false, 00:17:29.072 "reset": true, 00:17:29.072 "nvme_admin": false, 00:17:29.072 "nvme_io": false, 00:17:29.072 "nvme_io_md": false, 00:17:29.072 "write_zeroes": true, 00:17:29.072 "zcopy": false, 00:17:29.072 "get_zone_info": false, 00:17:29.072 "zone_management": false, 00:17:29.072 "zone_append": false, 00:17:29.072 "compare": false, 00:17:29.072 "compare_and_write": false, 00:17:29.072 "abort": false, 00:17:29.072 "seek_hole": false, 00:17:29.072 "seek_data": false, 00:17:29.072 "copy": false, 00:17:29.072 "nvme_iov_md": false 00:17:29.072 }, 00:17:29.072 "memory_domains": [ 00:17:29.072 { 00:17:29.072 "dma_device_id": "system", 00:17:29.072 "dma_device_type": 1 00:17:29.072 }, 00:17:29.072 { 00:17:29.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.072 "dma_device_type": 2 00:17:29.072 }, 00:17:29.072 { 00:17:29.072 "dma_device_id": "system", 00:17:29.072 "dma_device_type": 1 00:17:29.072 }, 00:17:29.072 { 00:17:29.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.072 "dma_device_type": 2 00:17:29.072 } 00:17:29.072 ], 
00:17:29.072 "driver_specific": { 00:17:29.072 "raid": { 00:17:29.072 "uuid": "62a0a911-5294-4c4c-b2a4-ca40eeb44df6", 00:17:29.072 "strip_size_kb": 0, 00:17:29.072 "state": "online", 00:17:29.072 "raid_level": "raid1", 00:17:29.072 "superblock": true, 00:17:29.072 "num_base_bdevs": 2, 00:17:29.072 "num_base_bdevs_discovered": 2, 00:17:29.072 "num_base_bdevs_operational": 2, 00:17:29.072 "base_bdevs_list": [ 00:17:29.072 { 00:17:29.072 "name": "pt1", 00:17:29.072 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.072 "is_configured": true, 00:17:29.072 "data_offset": 256, 00:17:29.072 "data_size": 7936 00:17:29.072 }, 00:17:29.072 { 00:17:29.072 "name": "pt2", 00:17:29.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.072 "is_configured": true, 00:17:29.072 "data_offset": 256, 00:17:29.072 "data_size": 7936 00:17:29.072 } 00:17:29.072 ] 00:17:29.072 } 00:17:29.072 } 00:17:29.072 }' 00:17:29.072 11:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:29.332 pt2' 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.332 11:02:36 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.332 [2024-11-15 11:02:36.171810] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=62a0a911-5294-4c4c-b2a4-ca40eeb44df6 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 62a0a911-5294-4c4c-b2a4-ca40eeb44df6 ']' 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.332 [2024-11-15 11:02:36.215405] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.332 [2024-11-15 11:02:36.215430] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.332 [2024-11-15 11:02:36.215511] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.332 [2024-11-15 11:02:36.215569] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.332 [2024-11-15 11:02:36.215581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.332 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:29.592 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.593 [2024-11-15 11:02:36.351226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:29.593 [2024-11-15 11:02:36.353290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:29.593 [2024-11-15 11:02:36.353384] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:29.593 [2024-11-15 11:02:36.353446] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:29.593 [2024-11-15 11:02:36.353462] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.593 [2024-11-15 11:02:36.353474] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:29.593 request: 00:17:29.593 { 00:17:29.593 "name": "raid_bdev1", 00:17:29.593 "raid_level": "raid1", 00:17:29.593 "base_bdevs": [ 00:17:29.593 "malloc1", 00:17:29.593 "malloc2" 00:17:29.593 ], 00:17:29.593 "superblock": false, 00:17:29.593 "method": "bdev_raid_create", 00:17:29.593 "req_id": 1 00:17:29.593 } 00:17:29.593 Got JSON-RPC error response 00:17:29.593 response: 00:17:29.593 { 00:17:29.593 "code": -17, 00:17:29.593 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:29.593 } 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.593 [2024-11-15 11:02:36.407085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:29.593 [2024-11-15 11:02:36.407202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.593 [2024-11-15 11:02:36.407260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:29.593 [2024-11-15 11:02:36.407318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.593 [2024-11-15 11:02:36.409783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.593 [2024-11-15 11:02:36.409872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:29.593 [2024-11-15 11:02:36.409992] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:29.593 [2024-11-15 11:02:36.410103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:29.593 pt1 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.593 "name": "raid_bdev1", 00:17:29.593 "uuid": "62a0a911-5294-4c4c-b2a4-ca40eeb44df6", 00:17:29.593 "strip_size_kb": 0, 00:17:29.593 "state": "configuring", 00:17:29.593 "raid_level": "raid1", 00:17:29.593 "superblock": true, 00:17:29.593 "num_base_bdevs": 2, 00:17:29.593 "num_base_bdevs_discovered": 1, 00:17:29.593 "num_base_bdevs_operational": 2, 00:17:29.593 "base_bdevs_list": [ 00:17:29.593 { 00:17:29.593 "name": "pt1", 00:17:29.593 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.593 "is_configured": true, 00:17:29.593 "data_offset": 256, 00:17:29.593 "data_size": 7936 00:17:29.593 }, 00:17:29.593 { 00:17:29.593 "name": null, 00:17:29.593 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.593 "is_configured": false, 00:17:29.593 "data_offset": 256, 00:17:29.593 "data_size": 7936 00:17:29.593 } 
00:17:29.593 ] 00:17:29.593 }' 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.593 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.163 [2024-11-15 11:02:36.830396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:30.163 [2024-11-15 11:02:36.830466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.163 [2024-11-15 11:02:36.830487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:30.163 [2024-11-15 11:02:36.830498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.163 [2024-11-15 11:02:36.830944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.163 [2024-11-15 11:02:36.830963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:30.163 [2024-11-15 11:02:36.831042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:30.163 [2024-11-15 11:02:36.831067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:30.163 [2024-11-15 11:02:36.831194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:30.163 [2024-11-15 11:02:36.831205] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:30.163 [2024-11-15 11:02:36.831451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:30.163 [2024-11-15 11:02:36.831624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:30.163 [2024-11-15 11:02:36.831640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:30.163 [2024-11-15 11:02:36.831785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.163 pt2 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.163 "name": "raid_bdev1", 00:17:30.163 "uuid": "62a0a911-5294-4c4c-b2a4-ca40eeb44df6", 00:17:30.163 "strip_size_kb": 0, 00:17:30.163 "state": "online", 00:17:30.163 "raid_level": "raid1", 00:17:30.163 "superblock": true, 00:17:30.163 "num_base_bdevs": 2, 00:17:30.163 "num_base_bdevs_discovered": 2, 00:17:30.163 "num_base_bdevs_operational": 2, 00:17:30.163 "base_bdevs_list": [ 00:17:30.163 { 00:17:30.163 "name": "pt1", 00:17:30.163 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:30.163 "is_configured": true, 00:17:30.163 "data_offset": 256, 00:17:30.163 "data_size": 7936 00:17:30.163 }, 00:17:30.163 { 00:17:30.163 "name": "pt2", 00:17:30.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.163 "is_configured": true, 00:17:30.163 "data_offset": 256, 00:17:30.163 "data_size": 7936 00:17:30.163 } 00:17:30.163 ] 00:17:30.163 }' 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.163 11:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.423 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:30.423 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:30.423 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:30.423 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:30.423 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:30.423 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:30.423 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:30.423 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:30.423 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.423 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.423 [2024-11-15 11:02:37.281875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.423 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.423 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:30.423 "name": "raid_bdev1", 00:17:30.423 "aliases": [ 00:17:30.423 "62a0a911-5294-4c4c-b2a4-ca40eeb44df6" 00:17:30.423 ], 00:17:30.423 "product_name": "Raid Volume", 00:17:30.423 "block_size": 4096, 00:17:30.423 "num_blocks": 7936, 00:17:30.423 "uuid": "62a0a911-5294-4c4c-b2a4-ca40eeb44df6", 00:17:30.423 "assigned_rate_limits": { 00:17:30.423 "rw_ios_per_sec": 0, 00:17:30.423 "rw_mbytes_per_sec": 0, 00:17:30.423 "r_mbytes_per_sec": 0, 00:17:30.423 "w_mbytes_per_sec": 0 00:17:30.423 }, 00:17:30.423 "claimed": false, 00:17:30.423 "zoned": false, 00:17:30.423 "supported_io_types": { 00:17:30.423 "read": true, 00:17:30.423 "write": true, 00:17:30.423 "unmap": false, 
00:17:30.424 "flush": false, 00:17:30.424 "reset": true, 00:17:30.424 "nvme_admin": false, 00:17:30.424 "nvme_io": false, 00:17:30.424 "nvme_io_md": false, 00:17:30.424 "write_zeroes": true, 00:17:30.424 "zcopy": false, 00:17:30.424 "get_zone_info": false, 00:17:30.424 "zone_management": false, 00:17:30.424 "zone_append": false, 00:17:30.424 "compare": false, 00:17:30.424 "compare_and_write": false, 00:17:30.424 "abort": false, 00:17:30.424 "seek_hole": false, 00:17:30.424 "seek_data": false, 00:17:30.424 "copy": false, 00:17:30.424 "nvme_iov_md": false 00:17:30.424 }, 00:17:30.424 "memory_domains": [ 00:17:30.424 { 00:17:30.424 "dma_device_id": "system", 00:17:30.424 "dma_device_type": 1 00:17:30.424 }, 00:17:30.424 { 00:17:30.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.424 "dma_device_type": 2 00:17:30.424 }, 00:17:30.424 { 00:17:30.424 "dma_device_id": "system", 00:17:30.424 "dma_device_type": 1 00:17:30.424 }, 00:17:30.424 { 00:17:30.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.424 "dma_device_type": 2 00:17:30.424 } 00:17:30.424 ], 00:17:30.424 "driver_specific": { 00:17:30.424 "raid": { 00:17:30.424 "uuid": "62a0a911-5294-4c4c-b2a4-ca40eeb44df6", 00:17:30.424 "strip_size_kb": 0, 00:17:30.424 "state": "online", 00:17:30.424 "raid_level": "raid1", 00:17:30.424 "superblock": true, 00:17:30.424 "num_base_bdevs": 2, 00:17:30.424 "num_base_bdevs_discovered": 2, 00:17:30.424 "num_base_bdevs_operational": 2, 00:17:30.424 "base_bdevs_list": [ 00:17:30.424 { 00:17:30.424 "name": "pt1", 00:17:30.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:30.424 "is_configured": true, 00:17:30.424 "data_offset": 256, 00:17:30.424 "data_size": 7936 00:17:30.424 }, 00:17:30.424 { 00:17:30.424 "name": "pt2", 00:17:30.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.424 "is_configured": true, 00:17:30.424 "data_offset": 256, 00:17:30.424 "data_size": 7936 00:17:30.424 } 00:17:30.424 ] 00:17:30.424 } 00:17:30.424 } 00:17:30.424 }' 00:17:30.424 
11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:30.684 pt2' 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.684 
11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.684 [2024-11-15 11:02:37.517512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 62a0a911-5294-4c4c-b2a4-ca40eeb44df6 '!=' 62a0a911-5294-4c4c-b2a4-ca40eeb44df6 ']' 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.684 [2024-11-15 11:02:37.561179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:30.684 
11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.684 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.944 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.944 "name": "raid_bdev1", 00:17:30.944 "uuid": "62a0a911-5294-4c4c-b2a4-ca40eeb44df6", 
00:17:30.944 "strip_size_kb": 0, 00:17:30.944 "state": "online", 00:17:30.944 "raid_level": "raid1", 00:17:30.944 "superblock": true, 00:17:30.944 "num_base_bdevs": 2, 00:17:30.944 "num_base_bdevs_discovered": 1, 00:17:30.944 "num_base_bdevs_operational": 1, 00:17:30.944 "base_bdevs_list": [ 00:17:30.944 { 00:17:30.944 "name": null, 00:17:30.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.944 "is_configured": false, 00:17:30.944 "data_offset": 0, 00:17:30.944 "data_size": 7936 00:17:30.944 }, 00:17:30.944 { 00:17:30.944 "name": "pt2", 00:17:30.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.944 "is_configured": true, 00:17:30.944 "data_offset": 256, 00:17:30.944 "data_size": 7936 00:17:30.944 } 00:17:30.944 ] 00:17:30.944 }' 00:17:30.944 11:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.944 11:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.202 [2024-11-15 11:02:38.032349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.202 [2024-11-15 11:02:38.032431] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.202 [2024-11-15 11:02:38.032546] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.202 [2024-11-15 11:02:38.032617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.202 [2024-11-15 11:02:38.032670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:31.202 11:02:38 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:31.202 11:02:38 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.202 [2024-11-15 11:02:38.092210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:31.202 [2024-11-15 11:02:38.092270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.202 [2024-11-15 11:02:38.092287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:31.202 [2024-11-15 11:02:38.092297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.202 [2024-11-15 11:02:38.094487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.202 [2024-11-15 11:02:38.094562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:31.202 [2024-11-15 11:02:38.094644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:31.202 [2024-11-15 11:02:38.094692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:31.202 [2024-11-15 11:02:38.094806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:31.202 [2024-11-15 11:02:38.094818] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:31.202 [2024-11-15 11:02:38.095029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:31.202 [2024-11-15 11:02:38.095177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:31.202 [2024-11-15 11:02:38.095185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:17:31.202 [2024-11-15 11:02:38.095336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.202 pt2 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.202 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.461 11:02:38 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.461 "name": "raid_bdev1", 00:17:31.461 "uuid": "62a0a911-5294-4c4c-b2a4-ca40eeb44df6", 00:17:31.461 "strip_size_kb": 0, 00:17:31.461 "state": "online", 00:17:31.461 "raid_level": "raid1", 00:17:31.461 "superblock": true, 00:17:31.461 "num_base_bdevs": 2, 00:17:31.461 "num_base_bdevs_discovered": 1, 00:17:31.461 "num_base_bdevs_operational": 1, 00:17:31.461 "base_bdevs_list": [ 00:17:31.461 { 00:17:31.461 "name": null, 00:17:31.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.461 "is_configured": false, 00:17:31.461 "data_offset": 256, 00:17:31.461 "data_size": 7936 00:17:31.461 }, 00:17:31.461 { 00:17:31.461 "name": "pt2", 00:17:31.461 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.461 "is_configured": true, 00:17:31.461 "data_offset": 256, 00:17:31.461 "data_size": 7936 00:17:31.461 } 00:17:31.461 ] 00:17:31.461 }' 00:17:31.461 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.461 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.720 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:31.720 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.720 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.720 [2024-11-15 11:02:38.507479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.720 [2024-11-15 11:02:38.507563] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.720 [2024-11-15 11:02:38.507659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.720 [2024-11-15 11:02:38.507724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.720 [2024-11-15 11:02:38.507773] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:31.720 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.720 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.720 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:31.720 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.720 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.720 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.720 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:31.720 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:31.720 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:31.720 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:31.720 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.720 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.720 [2024-11-15 11:02:38.567395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:31.720 [2024-11-15 11:02:38.567505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.720 [2024-11-15 11:02:38.567541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:31.720 [2024-11-15 11:02:38.567568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.720 [2024-11-15 11:02:38.569761] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.720 [2024-11-15 11:02:38.569832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:31.720 [2024-11-15 11:02:38.569938] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:31.720 [2024-11-15 11:02:38.570015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:31.721 [2024-11-15 11:02:38.570191] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:31.721 [2024-11-15 11:02:38.570242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.721 [2024-11-15 11:02:38.570284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:31.721 [2024-11-15 11:02:38.570405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:31.721 [2024-11-15 11:02:38.570521] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:31.721 [2024-11-15 11:02:38.570558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:31.721 [2024-11-15 11:02:38.570822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:31.721 [2024-11-15 11:02:38.571006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:31.721 [2024-11-15 11:02:38.571051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:31.721 [2024-11-15 11:02:38.571240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.721 pt1 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.721 "name": "raid_bdev1", 00:17:31.721 "uuid": "62a0a911-5294-4c4c-b2a4-ca40eeb44df6", 00:17:31.721 "strip_size_kb": 0, 00:17:31.721 "state": "online", 00:17:31.721 "raid_level": "raid1", 
00:17:31.721 "superblock": true, 00:17:31.721 "num_base_bdevs": 2, 00:17:31.721 "num_base_bdevs_discovered": 1, 00:17:31.721 "num_base_bdevs_operational": 1, 00:17:31.721 "base_bdevs_list": [ 00:17:31.721 { 00:17:31.721 "name": null, 00:17:31.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.721 "is_configured": false, 00:17:31.721 "data_offset": 256, 00:17:31.721 "data_size": 7936 00:17:31.721 }, 00:17:31.721 { 00:17:31.721 "name": "pt2", 00:17:31.721 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.721 "is_configured": true, 00:17:31.721 "data_offset": 256, 00:17:31.721 "data_size": 7936 00:17:31.721 } 00:17:31.721 ] 00:17:31.721 }' 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.721 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.289 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:32.289 11:02:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:32.289 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.289 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.289 11:02:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.289 
[2024-11-15 11:02:39.022851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 62a0a911-5294-4c4c-b2a4-ca40eeb44df6 '!=' 62a0a911-5294-4c4c-b2a4-ca40eeb44df6 ']' 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86365 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 86365 ']' 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 86365 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86365 00:17:32.289 killing process with pid 86365 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86365' 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 86365 00:17:32.289 [2024-11-15 11:02:39.099673] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:32.289 [2024-11-15 11:02:39.099761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.289 [2024-11-15 11:02:39.099808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.289 [2024-11-15 11:02:39.099822] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:32.289 11:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 86365 00:17:32.548 [2024-11-15 11:02:39.302573] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:33.488 11:02:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:33.488 00:17:33.488 real 0m5.952s 00:17:33.488 user 0m8.998s 00:17:33.488 sys 0m1.075s 00:17:33.488 11:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:33.488 11:02:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.488 ************************************ 00:17:33.488 END TEST raid_superblock_test_4k 00:17:33.488 ************************************ 00:17:33.748 11:02:40 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:33.748 11:02:40 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:33.748 11:02:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:33.748 11:02:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:33.748 11:02:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:33.748 ************************************ 00:17:33.748 START TEST raid_rebuild_test_sb_4k 00:17:33.748 ************************************ 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:33.748 11:02:40 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86688 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86688 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86688 ']' 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:33.748 11:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.748 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:33.748 Zero copy mechanism will not be used. 00:17:33.748 [2024-11-15 11:02:40.589075] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:17:33.749 [2024-11-15 11:02:40.589198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86688 ] 00:17:34.008 [2024-11-15 11:02:40.762740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.008 [2024-11-15 11:02:40.878956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.268 [2024-11-15 11:02:41.077651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.268 [2024-11-15 11:02:41.077714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.527 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:34.527 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:17:34.527 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:34.527 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:34.527 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.528 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.787 BaseBdev1_malloc 00:17:34.787 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.787 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:34.787 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.787 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.787 [2024-11-15 11:02:41.460113] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:34.787 [2024-11-15 11:02:41.460258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.787 [2024-11-15 11:02:41.460292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:34.787 [2024-11-15 11:02:41.460325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.787 [2024-11-15 11:02:41.462720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.787 [2024-11-15 11:02:41.462771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:34.787 BaseBdev1 00:17:34.787 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.787 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.788 BaseBdev2_malloc 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.788 [2024-11-15 11:02:41.517841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:34.788 [2024-11-15 11:02:41.517904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:34.788 [2024-11-15 11:02:41.517924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:34.788 [2024-11-15 11:02:41.517937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.788 [2024-11-15 11:02:41.520034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.788 [2024-11-15 11:02:41.520074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:34.788 BaseBdev2 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.788 spare_malloc 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.788 spare_delay 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.788 
[2024-11-15 11:02:41.604435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:34.788 [2024-11-15 11:02:41.604558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.788 [2024-11-15 11:02:41.604586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:34.788 [2024-11-15 11:02:41.604598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.788 [2024-11-15 11:02:41.606879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.788 [2024-11-15 11:02:41.606922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:34.788 spare 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.788 [2024-11-15 11:02:41.612479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.788 [2024-11-15 11:02:41.614245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.788 [2024-11-15 11:02:41.614428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:34.788 [2024-11-15 11:02:41.614446] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:34.788 [2024-11-15 11:02:41.614702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:34.788 [2024-11-15 11:02:41.614868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:34.788 [2024-11-15 
11:02:41.614876] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:34.788 [2024-11-15 11:02:41.615019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.788 "name": "raid_bdev1", 00:17:34.788 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:34.788 "strip_size_kb": 0, 00:17:34.788 "state": "online", 00:17:34.788 "raid_level": "raid1", 00:17:34.788 "superblock": true, 00:17:34.788 "num_base_bdevs": 2, 00:17:34.788 "num_base_bdevs_discovered": 2, 00:17:34.788 "num_base_bdevs_operational": 2, 00:17:34.788 "base_bdevs_list": [ 00:17:34.788 { 00:17:34.788 "name": "BaseBdev1", 00:17:34.788 "uuid": "6963f0fa-d5d8-5897-8db3-1d45ae50f06e", 00:17:34.788 "is_configured": true, 00:17:34.788 "data_offset": 256, 00:17:34.788 "data_size": 7936 00:17:34.788 }, 00:17:34.788 { 00:17:34.788 "name": "BaseBdev2", 00:17:34.788 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:34.788 "is_configured": true, 00:17:34.788 "data_offset": 256, 00:17:34.788 "data_size": 7936 00:17:34.788 } 00:17:34.788 ] 00:17:34.788 }' 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.788 11:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:35.358 [2024-11-15 11:02:42.064010] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:35.358 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:35.358 
11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:35.617 [2024-11-15 11:02:42.335337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:35.617 /dev/nbd0 00:17:35.617 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:35.617 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:35.617 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:35.617 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:17:35.617 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:35.617 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:35.618 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:35.618 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:17:35.618 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:35.618 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:35.618 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:35.618 1+0 records in 00:17:35.618 1+0 records out 00:17:35.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409167 s, 10.0 MB/s 00:17:35.618 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.618 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:17:35.618 11:02:42 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.618 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:35.618 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:17:35.618 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:35.618 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:35.618 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:35.618 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:35.618 11:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:36.186 7936+0 records in 00:17:36.186 7936+0 records out 00:17:36.186 32505856 bytes (33 MB, 31 MiB) copied, 0.636097 s, 51.1 MB/s 00:17:36.186 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:36.186 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:36.186 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:36.186 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:36.186 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:36.186 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:36.187 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:36.447 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:36.447 
[2024-11-15 11:02:43.258489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.447 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:36.447 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:36.447 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:36.447 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:36.447 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:36.447 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:36.447 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:36.447 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:36.447 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.447 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.447 [2024-11-15 11:02:43.274570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.448 11:02:43 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.448 "name": "raid_bdev1", 00:17:36.448 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:36.448 "strip_size_kb": 0, 00:17:36.448 "state": "online", 00:17:36.448 "raid_level": "raid1", 00:17:36.448 "superblock": true, 00:17:36.448 "num_base_bdevs": 2, 00:17:36.448 "num_base_bdevs_discovered": 1, 00:17:36.448 "num_base_bdevs_operational": 1, 00:17:36.448 "base_bdevs_list": [ 00:17:36.448 { 00:17:36.448 "name": null, 00:17:36.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.448 "is_configured": false, 00:17:36.448 "data_offset": 0, 00:17:36.448 "data_size": 7936 00:17:36.448 }, 00:17:36.448 { 00:17:36.448 "name": "BaseBdev2", 00:17:36.448 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:36.448 "is_configured": true, 00:17:36.448 "data_offset": 256, 00:17:36.448 
"data_size": 7936 00:17:36.448 } 00:17:36.448 ] 00:17:36.448 }' 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.448 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.016 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:37.016 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.016 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.016 [2024-11-15 11:02:43.749805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:37.016 [2024-11-15 11:02:43.768488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:37.016 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.016 11:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:37.016 [2024-11-15 11:02:43.770499] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.955 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.955 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.955 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.955 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.955 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.955 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.955 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:37.955 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.955 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.955 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.955 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.955 "name": "raid_bdev1", 00:17:37.955 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:37.955 "strip_size_kb": 0, 00:17:37.955 "state": "online", 00:17:37.955 "raid_level": "raid1", 00:17:37.955 "superblock": true, 00:17:37.955 "num_base_bdevs": 2, 00:17:37.955 "num_base_bdevs_discovered": 2, 00:17:37.955 "num_base_bdevs_operational": 2, 00:17:37.955 "process": { 00:17:37.955 "type": "rebuild", 00:17:37.955 "target": "spare", 00:17:37.955 "progress": { 00:17:37.955 "blocks": 2560, 00:17:37.955 "percent": 32 00:17:37.955 } 00:17:37.955 }, 00:17:37.955 "base_bdevs_list": [ 00:17:37.955 { 00:17:37.955 "name": "spare", 00:17:37.955 "uuid": "fa836986-1046-54fd-8af6-8c9c184ca4a9", 00:17:37.955 "is_configured": true, 00:17:37.955 "data_offset": 256, 00:17:37.955 "data_size": 7936 00:17:37.955 }, 00:17:37.955 { 00:17:37.955 "name": "BaseBdev2", 00:17:37.955 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:37.955 "is_configured": true, 00:17:37.955 "data_offset": 256, 00:17:37.955 "data_size": 7936 00:17:37.955 } 00:17:37.955 ] 00:17:37.955 }' 00:17:37.955 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.955 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.219 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.219 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:17:38.219 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:38.219 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.219 11:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.219 [2024-11-15 11:02:44.937837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.219 [2024-11-15 11:02:44.976200] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:38.219 [2024-11-15 11:02:44.976278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.219 [2024-11-15 11:02:44.976294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.219 [2024-11-15 11:02:44.976318] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.219 "name": "raid_bdev1", 00:17:38.219 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:38.219 "strip_size_kb": 0, 00:17:38.219 "state": "online", 00:17:38.219 "raid_level": "raid1", 00:17:38.219 "superblock": true, 00:17:38.219 "num_base_bdevs": 2, 00:17:38.219 "num_base_bdevs_discovered": 1, 00:17:38.219 "num_base_bdevs_operational": 1, 00:17:38.219 "base_bdevs_list": [ 00:17:38.219 { 00:17:38.219 "name": null, 00:17:38.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.219 "is_configured": false, 00:17:38.219 "data_offset": 0, 00:17:38.219 "data_size": 7936 00:17:38.219 }, 00:17:38.219 { 00:17:38.219 "name": "BaseBdev2", 00:17:38.219 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:38.219 "is_configured": true, 00:17:38.219 "data_offset": 256, 00:17:38.219 "data_size": 7936 00:17:38.219 } 00:17:38.219 ] 00:17:38.219 }' 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.219 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.806 11:02:45 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.806 "name": "raid_bdev1", 00:17:38.806 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:38.806 "strip_size_kb": 0, 00:17:38.806 "state": "online", 00:17:38.806 "raid_level": "raid1", 00:17:38.806 "superblock": true, 00:17:38.806 "num_base_bdevs": 2, 00:17:38.806 "num_base_bdevs_discovered": 1, 00:17:38.806 "num_base_bdevs_operational": 1, 00:17:38.806 "base_bdevs_list": [ 00:17:38.806 { 00:17:38.806 "name": null, 00:17:38.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.806 "is_configured": false, 00:17:38.806 "data_offset": 0, 00:17:38.806 "data_size": 7936 00:17:38.806 }, 00:17:38.806 { 00:17:38.806 "name": "BaseBdev2", 00:17:38.806 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:38.806 "is_configured": true, 00:17:38.806 "data_offset": 
256, 00:17:38.806 "data_size": 7936 00:17:38.806 } 00:17:38.806 ] 00:17:38.806 }' 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.806 [2024-11-15 11:02:45.616735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.806 [2024-11-15 11:02:45.633533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.806 11:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:38.806 [2024-11-15 11:02:45.635488] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:39.747 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.747 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.747 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.747 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.747 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.747 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.747 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.747 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.747 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.747 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.008 "name": "raid_bdev1", 00:17:40.008 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:40.008 "strip_size_kb": 0, 00:17:40.008 "state": "online", 00:17:40.008 "raid_level": "raid1", 00:17:40.008 "superblock": true, 00:17:40.008 "num_base_bdevs": 2, 00:17:40.008 "num_base_bdevs_discovered": 2, 00:17:40.008 "num_base_bdevs_operational": 2, 00:17:40.008 "process": { 00:17:40.008 "type": "rebuild", 00:17:40.008 "target": "spare", 00:17:40.008 "progress": { 00:17:40.008 "blocks": 2560, 00:17:40.008 "percent": 32 00:17:40.008 } 00:17:40.008 }, 00:17:40.008 "base_bdevs_list": [ 00:17:40.008 { 00:17:40.008 "name": "spare", 00:17:40.008 "uuid": "fa836986-1046-54fd-8af6-8c9c184ca4a9", 00:17:40.008 "is_configured": true, 00:17:40.008 "data_offset": 256, 00:17:40.008 "data_size": 7936 00:17:40.008 }, 00:17:40.008 { 00:17:40.008 "name": "BaseBdev2", 00:17:40.008 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:40.008 "is_configured": true, 00:17:40.008 "data_offset": 256, 00:17:40.008 "data_size": 7936 00:17:40.008 } 00:17:40.008 ] 00:17:40.008 }' 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:40.008 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=685 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.008 11:02:46 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.008 "name": "raid_bdev1", 00:17:40.008 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:40.008 "strip_size_kb": 0, 00:17:40.008 "state": "online", 00:17:40.008 "raid_level": "raid1", 00:17:40.008 "superblock": true, 00:17:40.008 "num_base_bdevs": 2, 00:17:40.008 "num_base_bdevs_discovered": 2, 00:17:40.008 "num_base_bdevs_operational": 2, 00:17:40.008 "process": { 00:17:40.008 "type": "rebuild", 00:17:40.008 "target": "spare", 00:17:40.008 "progress": { 00:17:40.008 "blocks": 2816, 00:17:40.008 "percent": 35 00:17:40.008 } 00:17:40.008 }, 00:17:40.008 "base_bdevs_list": [ 00:17:40.008 { 00:17:40.008 "name": "spare", 00:17:40.008 "uuid": "fa836986-1046-54fd-8af6-8c9c184ca4a9", 00:17:40.008 "is_configured": true, 00:17:40.008 "data_offset": 256, 00:17:40.008 "data_size": 7936 00:17:40.008 }, 00:17:40.008 { 00:17:40.008 "name": "BaseBdev2", 00:17:40.008 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:40.008 "is_configured": true, 00:17:40.008 "data_offset": 256, 00:17:40.008 "data_size": 7936 00:17:40.008 } 00:17:40.008 ] 00:17:40.008 }' 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.008 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.009 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.009 11:02:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.389 11:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.389 11:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.389 11:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.389 11:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.389 11:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.389 11:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.389 11:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.389 11:02:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.389 11:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.389 11:02:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.389 11:02:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.389 11:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.389 "name": "raid_bdev1", 00:17:41.389 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:41.389 "strip_size_kb": 0, 00:17:41.389 "state": "online", 00:17:41.389 "raid_level": "raid1", 00:17:41.389 "superblock": true, 00:17:41.389 "num_base_bdevs": 2, 00:17:41.389 "num_base_bdevs_discovered": 2, 00:17:41.389 "num_base_bdevs_operational": 2, 00:17:41.389 "process": { 00:17:41.389 "type": "rebuild", 00:17:41.389 "target": "spare", 00:17:41.389 "progress": { 00:17:41.389 "blocks": 5888, 00:17:41.389 "percent": 74 00:17:41.389 } 00:17:41.389 }, 00:17:41.389 "base_bdevs_list": [ 00:17:41.389 { 
00:17:41.389 "name": "spare", 00:17:41.389 "uuid": "fa836986-1046-54fd-8af6-8c9c184ca4a9", 00:17:41.389 "is_configured": true, 00:17:41.389 "data_offset": 256, 00:17:41.389 "data_size": 7936 00:17:41.389 }, 00:17:41.389 { 00:17:41.389 "name": "BaseBdev2", 00:17:41.389 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:41.389 "is_configured": true, 00:17:41.389 "data_offset": 256, 00:17:41.389 "data_size": 7936 00:17:41.389 } 00:17:41.389 ] 00:17:41.389 }' 00:17:41.389 11:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.389 11:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.389 11:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.389 11:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.389 11:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.959 [2024-11-15 11:02:48.749773] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:41.959 [2024-11-15 11:02:48.749938] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:41.959 [2024-11-15 11:02:48.750098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.219 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:42.219 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.219 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.219 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.219 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:42.219 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.219 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.219 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.219 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.219 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.219 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.479 "name": "raid_bdev1", 00:17:42.479 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:42.479 "strip_size_kb": 0, 00:17:42.479 "state": "online", 00:17:42.479 "raid_level": "raid1", 00:17:42.479 "superblock": true, 00:17:42.479 "num_base_bdevs": 2, 00:17:42.479 "num_base_bdevs_discovered": 2, 00:17:42.479 "num_base_bdevs_operational": 2, 00:17:42.479 "base_bdevs_list": [ 00:17:42.479 { 00:17:42.479 "name": "spare", 00:17:42.479 "uuid": "fa836986-1046-54fd-8af6-8c9c184ca4a9", 00:17:42.479 "is_configured": true, 00:17:42.479 "data_offset": 256, 00:17:42.479 "data_size": 7936 00:17:42.479 }, 00:17:42.479 { 00:17:42.479 "name": "BaseBdev2", 00:17:42.479 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:42.479 "is_configured": true, 00:17:42.479 "data_offset": 256, 00:17:42.479 "data_size": 7936 00:17:42.479 } 00:17:42.479 ] 00:17:42.479 }' 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.479 "name": "raid_bdev1", 00:17:42.479 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:42.479 "strip_size_kb": 0, 00:17:42.479 "state": "online", 00:17:42.479 "raid_level": "raid1", 00:17:42.479 "superblock": true, 00:17:42.479 "num_base_bdevs": 2, 00:17:42.479 "num_base_bdevs_discovered": 2, 00:17:42.479 "num_base_bdevs_operational": 2, 00:17:42.479 "base_bdevs_list": [ 00:17:42.479 { 00:17:42.479 "name": "spare", 00:17:42.479 "uuid": "fa836986-1046-54fd-8af6-8c9c184ca4a9", 00:17:42.479 "is_configured": true, 00:17:42.479 
"data_offset": 256, 00:17:42.479 "data_size": 7936 00:17:42.479 }, 00:17:42.479 { 00:17:42.479 "name": "BaseBdev2", 00:17:42.479 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:42.479 "is_configured": true, 00:17:42.479 "data_offset": 256, 00:17:42.479 "data_size": 7936 00:17:42.479 } 00:17:42.479 ] 00:17:42.479 }' 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.479 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.738 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.738 "name": "raid_bdev1", 00:17:42.738 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:42.738 "strip_size_kb": 0, 00:17:42.738 "state": "online", 00:17:42.738 "raid_level": "raid1", 00:17:42.738 "superblock": true, 00:17:42.738 "num_base_bdevs": 2, 00:17:42.738 "num_base_bdevs_discovered": 2, 00:17:42.738 "num_base_bdevs_operational": 2, 00:17:42.738 "base_bdevs_list": [ 00:17:42.738 { 00:17:42.738 "name": "spare", 00:17:42.738 "uuid": "fa836986-1046-54fd-8af6-8c9c184ca4a9", 00:17:42.738 "is_configured": true, 00:17:42.738 "data_offset": 256, 00:17:42.738 "data_size": 7936 00:17:42.738 }, 00:17:42.738 { 00:17:42.738 "name": "BaseBdev2", 00:17:42.738 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:42.738 "is_configured": true, 00:17:42.738 "data_offset": 256, 00:17:42.738 "data_size": 7936 00:17:42.738 } 00:17:42.738 ] 00:17:42.738 }' 00:17:42.738 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.738 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.996 
[2024-11-15 11:02:49.842712] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:42.996 [2024-11-15 11:02:49.842819] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:42.996 [2024-11-15 11:02:49.842935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.996 [2024-11-15 11:02:49.843038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.996 [2024-11-15 11:02:49.843089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:42.996 11:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:43.257 /dev/nbd0 00:17:43.257 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:43.257 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:43.257 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:43.257 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:17:43.257 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:43.257 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:43.257 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:43.257 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:17:43.257 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:43.257 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:43.257 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:43.257 1+0 records in 00:17:43.257 1+0 records out 00:17:43.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259397 s, 15.8 MB/s 00:17:43.257 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.257 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:17:43.257 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:43.517 /dev/nbd1 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:43.517 1+0 records in 00:17:43.517 1+0 records out 00:17:43.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030546 s, 13.4 MB/s 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:17:43.517 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.776 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:43.777 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:17:43.777 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:43.777 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.777 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:43.777 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:43.777 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:43.777 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:43.777 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:43.777 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:43.777 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:43.777 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:44.035 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:44.035 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:44.035 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:44.035 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:44.035 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:44.035 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:44.035 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:44.035 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:44.035 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:44.035 11:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:44.294 11:02:51 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.294 [2024-11-15 11:02:51.133845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:44.294 [2024-11-15 11:02:51.133907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.294 [2024-11-15 11:02:51.133933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:44.294 [2024-11-15 11:02:51.133942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.294 [2024-11-15 11:02:51.136124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.294 
[2024-11-15 11:02:51.136163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:44.294 [2024-11-15 11:02:51.136259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:44.294 [2024-11-15 11:02:51.136336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.294 [2024-11-15 11:02:51.136520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.294 spare 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.294 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.553 [2024-11-15 11:02:51.236444] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:44.553 [2024-11-15 11:02:51.236487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:44.553 [2024-11-15 11:02:51.236820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:44.553 [2024-11-15 11:02:51.237024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:44.553 [2024-11-15 11:02:51.237038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:44.553 [2024-11-15 11:02:51.237233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.553 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.553 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:44.553 11:02:51 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.553 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.553 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.553 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.553 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.553 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.553 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.553 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.554 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.554 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.554 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.554 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.554 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.554 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.554 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.554 "name": "raid_bdev1", 00:17:44.554 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:44.554 "strip_size_kb": 0, 00:17:44.554 "state": "online", 00:17:44.554 "raid_level": "raid1", 00:17:44.554 "superblock": true, 00:17:44.554 "num_base_bdevs": 2, 00:17:44.554 "num_base_bdevs_discovered": 2, 00:17:44.554 "num_base_bdevs_operational": 2, 
00:17:44.554 "base_bdevs_list": [ 00:17:44.554 { 00:17:44.554 "name": "spare", 00:17:44.554 "uuid": "fa836986-1046-54fd-8af6-8c9c184ca4a9", 00:17:44.554 "is_configured": true, 00:17:44.554 "data_offset": 256, 00:17:44.554 "data_size": 7936 00:17:44.554 }, 00:17:44.554 { 00:17:44.554 "name": "BaseBdev2", 00:17:44.554 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:44.554 "is_configured": true, 00:17:44.554 "data_offset": 256, 00:17:44.554 "data_size": 7936 00:17:44.554 } 00:17:44.554 ] 00:17:44.554 }' 00:17:44.554 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.554 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.813 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:44.813 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.813 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:44.813 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:44.813 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.813 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.813 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.813 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.813 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.813 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.074 "name": "raid_bdev1", 00:17:45.074 
"uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:45.074 "strip_size_kb": 0, 00:17:45.074 "state": "online", 00:17:45.074 "raid_level": "raid1", 00:17:45.074 "superblock": true, 00:17:45.074 "num_base_bdevs": 2, 00:17:45.074 "num_base_bdevs_discovered": 2, 00:17:45.074 "num_base_bdevs_operational": 2, 00:17:45.074 "base_bdevs_list": [ 00:17:45.074 { 00:17:45.074 "name": "spare", 00:17:45.074 "uuid": "fa836986-1046-54fd-8af6-8c9c184ca4a9", 00:17:45.074 "is_configured": true, 00:17:45.074 "data_offset": 256, 00:17:45.074 "data_size": 7936 00:17:45.074 }, 00:17:45.074 { 00:17:45.074 "name": "BaseBdev2", 00:17:45.074 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:45.074 "is_configured": true, 00:17:45.074 "data_offset": 256, 00:17:45.074 "data_size": 7936 00:17:45.074 } 00:17:45.074 ] 00:17:45.074 }' 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.074 [2024-11-15 11:02:51.924585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.074 
11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.074 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.074 "name": "raid_bdev1", 00:17:45.074 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:45.074 "strip_size_kb": 0, 00:17:45.074 "state": "online", 00:17:45.074 "raid_level": "raid1", 00:17:45.074 "superblock": true, 00:17:45.074 "num_base_bdevs": 2, 00:17:45.074 "num_base_bdevs_discovered": 1, 00:17:45.074 "num_base_bdevs_operational": 1, 00:17:45.074 "base_bdevs_list": [ 00:17:45.074 { 00:17:45.074 "name": null, 00:17:45.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.074 "is_configured": false, 00:17:45.074 "data_offset": 0, 00:17:45.074 "data_size": 7936 00:17:45.074 }, 00:17:45.074 { 00:17:45.074 "name": "BaseBdev2", 00:17:45.075 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:45.075 "is_configured": true, 00:17:45.075 "data_offset": 256, 00:17:45.075 "data_size": 7936 00:17:45.075 } 00:17:45.075 ] 00:17:45.075 }' 00:17:45.075 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.075 11:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.645 11:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:45.645 11:02:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.645 11:02:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.645 [2024-11-15 11:02:52.408562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.645 [2024-11-15 11:02:52.408838] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:17:45.645 [2024-11-15 11:02:52.408921] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:45.645 [2024-11-15 11:02:52.408994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.645 [2024-11-15 11:02:52.424897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:45.645 11:02:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.645 11:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:45.645 [2024-11-15 11:02:52.426861] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:46.584 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.584 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.584 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.584 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.584 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.584 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.584 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.584 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.584 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.584 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.584 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.584 
"name": "raid_bdev1", 00:17:46.584 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:46.584 "strip_size_kb": 0, 00:17:46.584 "state": "online", 00:17:46.584 "raid_level": "raid1", 00:17:46.584 "superblock": true, 00:17:46.584 "num_base_bdevs": 2, 00:17:46.584 "num_base_bdevs_discovered": 2, 00:17:46.584 "num_base_bdevs_operational": 2, 00:17:46.584 "process": { 00:17:46.584 "type": "rebuild", 00:17:46.584 "target": "spare", 00:17:46.584 "progress": { 00:17:46.584 "blocks": 2560, 00:17:46.584 "percent": 32 00:17:46.584 } 00:17:46.584 }, 00:17:46.584 "base_bdevs_list": [ 00:17:46.584 { 00:17:46.584 "name": "spare", 00:17:46.584 "uuid": "fa836986-1046-54fd-8af6-8c9c184ca4a9", 00:17:46.584 "is_configured": true, 00:17:46.584 "data_offset": 256, 00:17:46.584 "data_size": 7936 00:17:46.584 }, 00:17:46.584 { 00:17:46.584 "name": "BaseBdev2", 00:17:46.584 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:46.584 "is_configured": true, 00:17:46.584 "data_offset": 256, 00:17:46.584 "data_size": 7936 00:17:46.584 } 00:17:46.584 ] 00:17:46.584 }' 00:17:46.584 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.844 [2024-11-15 11:02:53.582389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.844 [2024-11-15 
11:02:53.632647] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:46.844 [2024-11-15 11:02:53.632765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.844 [2024-11-15 11:02:53.632781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.844 [2024-11-15 11:02:53.632791] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.844 "name": "raid_bdev1", 00:17:46.844 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:46.844 "strip_size_kb": 0, 00:17:46.844 "state": "online", 00:17:46.844 "raid_level": "raid1", 00:17:46.844 "superblock": true, 00:17:46.844 "num_base_bdevs": 2, 00:17:46.844 "num_base_bdevs_discovered": 1, 00:17:46.844 "num_base_bdevs_operational": 1, 00:17:46.844 "base_bdevs_list": [ 00:17:46.844 { 00:17:46.844 "name": null, 00:17:46.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.844 "is_configured": false, 00:17:46.844 "data_offset": 0, 00:17:46.844 "data_size": 7936 00:17:46.844 }, 00:17:46.844 { 00:17:46.844 "name": "BaseBdev2", 00:17:46.844 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:46.844 "is_configured": true, 00:17:46.844 "data_offset": 256, 00:17:46.844 "data_size": 7936 00:17:46.844 } 00:17:46.844 ] 00:17:46.844 }' 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.844 11:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.413 11:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:47.413 11:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.413 11:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.413 [2024-11-15 11:02:54.149751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:47.413 [2024-11-15 11:02:54.149903] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.413 [2024-11-15 11:02:54.149948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:47.413 [2024-11-15 11:02:54.149998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.413 [2024-11-15 11:02:54.150572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.413 [2024-11-15 11:02:54.150636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:47.413 [2024-11-15 11:02:54.150756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:47.413 [2024-11-15 11:02:54.150800] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:47.413 [2024-11-15 11:02:54.150846] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:47.413 [2024-11-15 11:02:54.150902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.413 [2024-11-15 11:02:54.167319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:47.413 spare 00:17:47.413 11:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.413 [2024-11-15 11:02:54.169285] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:47.413 11:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:48.350 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.350 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.350 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.350 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.350 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.350 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.350 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.350 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.350 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.350 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.350 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.350 "name": "raid_bdev1", 00:17:48.350 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:48.350 "strip_size_kb": 0, 00:17:48.350 
"state": "online", 00:17:48.350 "raid_level": "raid1", 00:17:48.350 "superblock": true, 00:17:48.350 "num_base_bdevs": 2, 00:17:48.350 "num_base_bdevs_discovered": 2, 00:17:48.350 "num_base_bdevs_operational": 2, 00:17:48.350 "process": { 00:17:48.350 "type": "rebuild", 00:17:48.350 "target": "spare", 00:17:48.350 "progress": { 00:17:48.350 "blocks": 2560, 00:17:48.350 "percent": 32 00:17:48.350 } 00:17:48.350 }, 00:17:48.350 "base_bdevs_list": [ 00:17:48.350 { 00:17:48.350 "name": "spare", 00:17:48.350 "uuid": "fa836986-1046-54fd-8af6-8c9c184ca4a9", 00:17:48.350 "is_configured": true, 00:17:48.350 "data_offset": 256, 00:17:48.350 "data_size": 7936 00:17:48.350 }, 00:17:48.350 { 00:17:48.350 "name": "BaseBdev2", 00:17:48.350 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:48.350 "is_configured": true, 00:17:48.350 "data_offset": 256, 00:17:48.350 "data_size": 7936 00:17:48.350 } 00:17:48.350 ] 00:17:48.350 }' 00:17:48.350 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.350 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.350 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.610 [2024-11-15 11:02:55.328808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.610 [2024-11-15 11:02:55.375059] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:48.610 [2024-11-15 11:02:55.375148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.610 [2024-11-15 11:02:55.375167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.610 [2024-11-15 11:02:55.375175] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.610 11:02:55 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.610 "name": "raid_bdev1", 00:17:48.610 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:48.610 "strip_size_kb": 0, 00:17:48.610 "state": "online", 00:17:48.610 "raid_level": "raid1", 00:17:48.610 "superblock": true, 00:17:48.610 "num_base_bdevs": 2, 00:17:48.610 "num_base_bdevs_discovered": 1, 00:17:48.610 "num_base_bdevs_operational": 1, 00:17:48.610 "base_bdevs_list": [ 00:17:48.610 { 00:17:48.610 "name": null, 00:17:48.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.610 "is_configured": false, 00:17:48.610 "data_offset": 0, 00:17:48.610 "data_size": 7936 00:17:48.610 }, 00:17:48.610 { 00:17:48.610 "name": "BaseBdev2", 00:17:48.610 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:48.610 "is_configured": true, 00:17:48.610 "data_offset": 256, 00:17:48.610 "data_size": 7936 00:17:48.610 } 00:17:48.610 ] 00:17:48.610 }' 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.610 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.176 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.176 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.176 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.176 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.176 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.176 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.176 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.176 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.176 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.176 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.176 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.176 "name": "raid_bdev1", 00:17:49.176 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:49.176 "strip_size_kb": 0, 00:17:49.176 "state": "online", 00:17:49.176 "raid_level": "raid1", 00:17:49.176 "superblock": true, 00:17:49.176 "num_base_bdevs": 2, 00:17:49.176 "num_base_bdevs_discovered": 1, 00:17:49.176 "num_base_bdevs_operational": 1, 00:17:49.176 "base_bdevs_list": [ 00:17:49.176 { 00:17:49.176 "name": null, 00:17:49.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.176 "is_configured": false, 00:17:49.176 "data_offset": 0, 00:17:49.176 "data_size": 7936 00:17:49.176 }, 00:17:49.176 { 00:17:49.176 "name": "BaseBdev2", 00:17:49.176 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:49.176 "is_configured": true, 00:17:49.176 "data_offset": 256, 00:17:49.176 "data_size": 7936 00:17:49.176 } 00:17:49.176 ] 00:17:49.176 }' 00:17:49.176 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.176 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.176 11:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.176 11:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.176 11:02:56 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:49.176 11:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.176 11:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.176 11:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.176 11:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:49.176 11:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.176 11:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.176 [2024-11-15 11:02:56.052512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:49.176 [2024-11-15 11:02:56.052582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.176 [2024-11-15 11:02:56.052606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:49.176 [2024-11-15 11:02:56.052625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.176 [2024-11-15 11:02:56.053089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.176 [2024-11-15 11:02:56.053105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:49.176 [2024-11-15 11:02:56.053193] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:49.176 [2024-11-15 11:02:56.053207] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:49.176 [2024-11-15 11:02:56.053216] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:49.176 [2024-11-15 11:02:56.053227] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:49.176 BaseBdev1 00:17:49.176 11:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.176 11:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:50.555 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.555 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.555 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.555 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.555 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.555 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.555 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.555 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.555 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.555 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.555 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.555 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.555 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.555 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.555 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.555 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.555 "name": "raid_bdev1", 00:17:50.555 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:50.555 "strip_size_kb": 0, 00:17:50.555 "state": "online", 00:17:50.555 "raid_level": "raid1", 00:17:50.555 "superblock": true, 00:17:50.555 "num_base_bdevs": 2, 00:17:50.555 "num_base_bdevs_discovered": 1, 00:17:50.555 "num_base_bdevs_operational": 1, 00:17:50.555 "base_bdevs_list": [ 00:17:50.555 { 00:17:50.555 "name": null, 00:17:50.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.555 "is_configured": false, 00:17:50.555 "data_offset": 0, 00:17:50.555 "data_size": 7936 00:17:50.555 }, 00:17:50.555 { 00:17:50.555 "name": "BaseBdev2", 00:17:50.555 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:50.555 "is_configured": true, 00:17:50.556 "data_offset": 256, 00:17:50.556 "data_size": 7936 00:17:50.556 } 00:17:50.556 ] 00:17:50.556 }' 00:17:50.556 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.556 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.556 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:50.556 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.556 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:50.556 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:50.556 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.817 "name": "raid_bdev1", 00:17:50.817 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:50.817 "strip_size_kb": 0, 00:17:50.817 "state": "online", 00:17:50.817 "raid_level": "raid1", 00:17:50.817 "superblock": true, 00:17:50.817 "num_base_bdevs": 2, 00:17:50.817 "num_base_bdevs_discovered": 1, 00:17:50.817 "num_base_bdevs_operational": 1, 00:17:50.817 "base_bdevs_list": [ 00:17:50.817 { 00:17:50.817 "name": null, 00:17:50.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.817 "is_configured": false, 00:17:50.817 "data_offset": 0, 00:17:50.817 "data_size": 7936 00:17:50.817 }, 00:17:50.817 { 00:17:50.817 "name": "BaseBdev2", 00:17:50.817 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:50.817 "is_configured": true, 00:17:50.817 "data_offset": 256, 00:17:50.817 "data_size": 7936 00:17:50.817 } 00:17:50.817 ] 00:17:50.817 }' 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.817 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.818 [2024-11-15 11:02:57.622063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.818 [2024-11-15 11:02:57.622311] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:50.818 [2024-11-15 11:02:57.622383] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:50.818 request: 00:17:50.818 { 00:17:50.818 "base_bdev": "BaseBdev1", 00:17:50.818 "raid_bdev": "raid_bdev1", 00:17:50.818 "method": "bdev_raid_add_base_bdev", 00:17:50.818 "req_id": 1 00:17:50.818 } 00:17:50.818 Got JSON-RPC error response 00:17:50.818 response: 00:17:50.818 { 00:17:50.818 "code": -22, 00:17:50.818 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:50.818 } 00:17:50.818 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:17:50.818 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:17:50.818 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.818 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.818 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.818 11:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.776 "name": "raid_bdev1", 00:17:51.776 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:51.776 "strip_size_kb": 0, 00:17:51.776 "state": "online", 00:17:51.776 "raid_level": "raid1", 00:17:51.776 "superblock": true, 00:17:51.776 "num_base_bdevs": 2, 00:17:51.776 "num_base_bdevs_discovered": 1, 00:17:51.776 "num_base_bdevs_operational": 1, 00:17:51.776 "base_bdevs_list": [ 00:17:51.776 { 00:17:51.776 "name": null, 00:17:51.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.776 "is_configured": false, 00:17:51.776 "data_offset": 0, 00:17:51.776 "data_size": 7936 00:17:51.776 }, 00:17:51.776 { 00:17:51.776 "name": "BaseBdev2", 00:17:51.776 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:51.776 "is_configured": true, 00:17:51.776 "data_offset": 256, 00:17:51.776 "data_size": 7936 00:17:51.776 } 00:17:51.776 ] 00:17:51.776 }' 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.776 11:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.344 11:02:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.344 "name": "raid_bdev1", 00:17:52.344 "uuid": "e9caf8c0-4468-422a-95e6-6555015abddb", 00:17:52.344 "strip_size_kb": 0, 00:17:52.344 "state": "online", 00:17:52.344 "raid_level": "raid1", 00:17:52.344 "superblock": true, 00:17:52.344 "num_base_bdevs": 2, 00:17:52.344 "num_base_bdevs_discovered": 1, 00:17:52.344 "num_base_bdevs_operational": 1, 00:17:52.344 "base_bdevs_list": [ 00:17:52.344 { 00:17:52.344 "name": null, 00:17:52.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.344 "is_configured": false, 00:17:52.344 "data_offset": 0, 00:17:52.344 "data_size": 7936 00:17:52.344 }, 00:17:52.344 { 00:17:52.344 "name": "BaseBdev2", 00:17:52.344 "uuid": "b0416871-8e6a-57f7-a148-f4c9f5a8d0c9", 00:17:52.344 "is_configured": true, 00:17:52.344 "data_offset": 256, 00:17:52.344 "data_size": 7936 00:17:52.344 } 00:17:52.344 ] 00:17:52.344 }' 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.344 11:02:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86688 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86688 ']' 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86688 00:17:52.344 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:17:52.603 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:52.603 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86688 00:17:52.603 killing process with pid 86688 00:17:52.603 Received shutdown signal, test time was about 60.000000 seconds 00:17:52.603 00:17:52.603 Latency(us) 00:17:52.603 [2024-11-15T11:02:59.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.603 [2024-11-15T11:02:59.531Z] =================================================================================================================== 00:17:52.603 [2024-11-15T11:02:59.531Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:52.603 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:52.603 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:52.603 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86688' 00:17:52.603 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86688 00:17:52.603 [2024-11-15 11:02:59.292248] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:52.603 [2024-11-15 11:02:59.292407] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.603 11:02:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86688 00:17:52.603 [2024-11-15 
11:02:59.292463] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.603 [2024-11-15 11:02:59.292477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:52.861 [2024-11-15 11:02:59.594318] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:53.800 11:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:53.800 00:17:53.800 real 0m20.176s 00:17:53.800 user 0m26.577s 00:17:53.800 sys 0m2.618s 00:17:53.800 11:03:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:53.800 ************************************ 00:17:53.800 END TEST raid_rebuild_test_sb_4k 00:17:53.800 ************************************ 00:17:53.800 11:03:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.060 11:03:00 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:54.060 11:03:00 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:54.060 11:03:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:54.060 11:03:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:54.060 11:03:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.060 ************************************ 00:17:54.060 START TEST raid_state_function_test_sb_md_separate 00:17:54.060 ************************************ 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:54.060 
11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:54.060 11:03:00 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87378 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:54.060 Process raid pid: 87378 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87378' 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87378 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87378 ']' 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:54.060 11:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.060 [2024-11-15 11:03:00.841687] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:17:54.060 [2024-11-15 11:03:00.841939] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.319 [2024-11-15 11:03:01.020107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.319 [2024-11-15 11:03:01.139245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.577 [2024-11-15 11:03:01.343211] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.577 [2024-11-15 11:03:01.343360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.836 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:54.836 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:17:54.836 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:54.836 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.836 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.836 [2024-11-15 11:03:01.684046] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:54.836 [2024-11-15 11:03:01.684153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:54.836 [2024-11-15 11:03:01.684189] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:54.836 [2024-11-15 11:03:01.684214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:54.836 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.836 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:54.836 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.836 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.836 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.836 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.836 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:54.836 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.836 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.836 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.836 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.837 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.837 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.837 11:03:01 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.837 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.837 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.837 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.837 "name": "Existed_Raid", 00:17:54.837 "uuid": "7be19635-506a-4fed-9ed4-288312bdf8c9", 00:17:54.837 "strip_size_kb": 0, 00:17:54.837 "state": "configuring", 00:17:54.837 "raid_level": "raid1", 00:17:54.837 "superblock": true, 00:17:54.837 "num_base_bdevs": 2, 00:17:54.837 "num_base_bdevs_discovered": 0, 00:17:54.837 "num_base_bdevs_operational": 2, 00:17:54.837 "base_bdevs_list": [ 00:17:54.837 { 00:17:54.837 "name": "BaseBdev1", 00:17:54.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.837 "is_configured": false, 00:17:54.837 "data_offset": 0, 00:17:54.837 "data_size": 0 00:17:54.837 }, 00:17:54.837 { 00:17:54.837 "name": "BaseBdev2", 00:17:54.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.837 "is_configured": false, 00:17:54.837 "data_offset": 0, 00:17:54.837 "data_size": 0 00:17:54.837 } 00:17:54.837 ] 00:17:54.837 }' 00:17:54.837 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.837 11:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.404 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:55.404 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.404 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.404 [2024-11-15 
11:03:02.111234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.404 [2024-11-15 11:03:02.111381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:55.404 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.404 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:55.404 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.404 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.404 [2024-11-15 11:03:02.123193] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:55.404 [2024-11-15 11:03:02.123280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:55.404 [2024-11-15 11:03:02.123326] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.404 [2024-11-15 11:03:02.123373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.404 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.404 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:55.404 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.404 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.404 [2024-11-15 11:03:02.173335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.404 BaseBdev1 
00:17:55.404 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.404 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:55.404 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.405 [ 00:17:55.405 { 00:17:55.405 "name": "BaseBdev1", 00:17:55.405 "aliases": [ 00:17:55.405 "0753404a-5752-4b0c-9fac-bd368d6c98c9" 00:17:55.405 ], 00:17:55.405 "product_name": "Malloc disk", 00:17:55.405 
"block_size": 4096, 00:17:55.405 "num_blocks": 8192, 00:17:55.405 "uuid": "0753404a-5752-4b0c-9fac-bd368d6c98c9", 00:17:55.405 "md_size": 32, 00:17:55.405 "md_interleave": false, 00:17:55.405 "dif_type": 0, 00:17:55.405 "assigned_rate_limits": { 00:17:55.405 "rw_ios_per_sec": 0, 00:17:55.405 "rw_mbytes_per_sec": 0, 00:17:55.405 "r_mbytes_per_sec": 0, 00:17:55.405 "w_mbytes_per_sec": 0 00:17:55.405 }, 00:17:55.405 "claimed": true, 00:17:55.405 "claim_type": "exclusive_write", 00:17:55.405 "zoned": false, 00:17:55.405 "supported_io_types": { 00:17:55.405 "read": true, 00:17:55.405 "write": true, 00:17:55.405 "unmap": true, 00:17:55.405 "flush": true, 00:17:55.405 "reset": true, 00:17:55.405 "nvme_admin": false, 00:17:55.405 "nvme_io": false, 00:17:55.405 "nvme_io_md": false, 00:17:55.405 "write_zeroes": true, 00:17:55.405 "zcopy": true, 00:17:55.405 "get_zone_info": false, 00:17:55.405 "zone_management": false, 00:17:55.405 "zone_append": false, 00:17:55.405 "compare": false, 00:17:55.405 "compare_and_write": false, 00:17:55.405 "abort": true, 00:17:55.405 "seek_hole": false, 00:17:55.405 "seek_data": false, 00:17:55.405 "copy": true, 00:17:55.405 "nvme_iov_md": false 00:17:55.405 }, 00:17:55.405 "memory_domains": [ 00:17:55.405 { 00:17:55.405 "dma_device_id": "system", 00:17:55.405 "dma_device_type": 1 00:17:55.405 }, 00:17:55.405 { 00:17:55.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.405 "dma_device_type": 2 00:17:55.405 } 00:17:55.405 ], 00:17:55.405 "driver_specific": {} 00:17:55.405 } 00:17:55.405 ] 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:55.405 11:03:02 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.405 "name": "Existed_Raid", 00:17:55.405 "uuid": "cc776746-360f-4c7d-9682-548078c08199", 
00:17:55.405 "strip_size_kb": 0, 00:17:55.405 "state": "configuring", 00:17:55.405 "raid_level": "raid1", 00:17:55.405 "superblock": true, 00:17:55.405 "num_base_bdevs": 2, 00:17:55.405 "num_base_bdevs_discovered": 1, 00:17:55.405 "num_base_bdevs_operational": 2, 00:17:55.405 "base_bdevs_list": [ 00:17:55.405 { 00:17:55.405 "name": "BaseBdev1", 00:17:55.405 "uuid": "0753404a-5752-4b0c-9fac-bd368d6c98c9", 00:17:55.405 "is_configured": true, 00:17:55.405 "data_offset": 256, 00:17:55.405 "data_size": 7936 00:17:55.405 }, 00:17:55.405 { 00:17:55.405 "name": "BaseBdev2", 00:17:55.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.405 "is_configured": false, 00:17:55.405 "data_offset": 0, 00:17:55.405 "data_size": 0 00:17:55.405 } 00:17:55.405 ] 00:17:55.405 }' 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.405 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.972 [2024-11-15 11:03:02.640590] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.972 [2024-11-15 11:03:02.640649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:55.972 11:03:02 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.972 [2024-11-15 11:03:02.652612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.972 [2024-11-15 11:03:02.654523] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.972 [2024-11-15 11:03:02.654599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.972 "name": "Existed_Raid", 00:17:55.972 "uuid": "dfbc83bd-8634-4eba-90d6-a7c6e913c969", 00:17:55.972 "strip_size_kb": 0, 00:17:55.972 "state": "configuring", 00:17:55.972 "raid_level": "raid1", 00:17:55.972 "superblock": true, 00:17:55.972 "num_base_bdevs": 2, 00:17:55.972 "num_base_bdevs_discovered": 1, 00:17:55.972 "num_base_bdevs_operational": 2, 00:17:55.972 "base_bdevs_list": [ 00:17:55.972 { 00:17:55.972 "name": "BaseBdev1", 00:17:55.972 "uuid": "0753404a-5752-4b0c-9fac-bd368d6c98c9", 00:17:55.972 "is_configured": true, 00:17:55.972 "data_offset": 256, 00:17:55.972 "data_size": 7936 00:17:55.972 }, 00:17:55.972 { 00:17:55.972 "name": "BaseBdev2", 00:17:55.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.972 "is_configured": false, 00:17:55.972 "data_offset": 0, 00:17:55.972 "data_size": 0 00:17:55.972 } 00:17:55.972 ] 00:17:55.972 }' 00:17:55.972 11:03:02 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.972 11:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.232 [2024-11-15 11:03:03.125796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.232 [2024-11-15 11:03:03.126021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:56.232 [2024-11-15 11:03:03.126035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:56.232 [2024-11-15 11:03:03.126114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:56.232 [2024-11-15 11:03:03.126272] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:56.232 [2024-11-15 11:03:03.126284] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:56.232 [2024-11-15 11:03:03.126437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.232 BaseBdev2 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.232 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.232 [ 00:17:56.232 { 00:17:56.232 "name": "BaseBdev2", 00:17:56.232 "aliases": [ 00:17:56.232 "0f3d944d-96bb-4370-baf6-85e67e8c621e" 00:17:56.232 ], 00:17:56.232 "product_name": "Malloc disk", 00:17:56.232 "block_size": 4096, 00:17:56.232 "num_blocks": 8192, 00:17:56.232 "uuid": "0f3d944d-96bb-4370-baf6-85e67e8c621e", 00:17:56.232 "md_size": 32, 00:17:56.232 "md_interleave": false, 00:17:56.232 "dif_type": 0, 00:17:56.232 "assigned_rate_limits": { 00:17:56.232 "rw_ios_per_sec": 0, 00:17:56.232 "rw_mbytes_per_sec": 0, 00:17:56.232 "r_mbytes_per_sec": 0, 00:17:56.232 "w_mbytes_per_sec": 0 00:17:56.232 }, 00:17:56.232 "claimed": true, 00:17:56.232 "claim_type": 
"exclusive_write", 00:17:56.493 "zoned": false, 00:17:56.493 "supported_io_types": { 00:17:56.493 "read": true, 00:17:56.493 "write": true, 00:17:56.493 "unmap": true, 00:17:56.493 "flush": true, 00:17:56.493 "reset": true, 00:17:56.493 "nvme_admin": false, 00:17:56.493 "nvme_io": false, 00:17:56.493 "nvme_io_md": false, 00:17:56.493 "write_zeroes": true, 00:17:56.493 "zcopy": true, 00:17:56.493 "get_zone_info": false, 00:17:56.493 "zone_management": false, 00:17:56.493 "zone_append": false, 00:17:56.493 "compare": false, 00:17:56.493 "compare_and_write": false, 00:17:56.493 "abort": true, 00:17:56.493 "seek_hole": false, 00:17:56.493 "seek_data": false, 00:17:56.493 "copy": true, 00:17:56.493 "nvme_iov_md": false 00:17:56.493 }, 00:17:56.493 "memory_domains": [ 00:17:56.493 { 00:17:56.493 "dma_device_id": "system", 00:17:56.493 "dma_device_type": 1 00:17:56.493 }, 00:17:56.493 { 00:17:56.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.493 "dma_device_type": 2 00:17:56.493 } 00:17:56.493 ], 00:17:56.493 "driver_specific": {} 00:17:56.493 } 00:17:56.493 ] 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.493 
11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.493 "name": "Existed_Raid", 00:17:56.493 "uuid": "dfbc83bd-8634-4eba-90d6-a7c6e913c969", 00:17:56.493 "strip_size_kb": 0, 00:17:56.493 "state": "online", 00:17:56.493 "raid_level": "raid1", 00:17:56.493 "superblock": true, 00:17:56.493 "num_base_bdevs": 2, 00:17:56.493 "num_base_bdevs_discovered": 2, 00:17:56.493 "num_base_bdevs_operational": 2, 00:17:56.493 
"base_bdevs_list": [ 00:17:56.493 { 00:17:56.493 "name": "BaseBdev1", 00:17:56.493 "uuid": "0753404a-5752-4b0c-9fac-bd368d6c98c9", 00:17:56.493 "is_configured": true, 00:17:56.493 "data_offset": 256, 00:17:56.493 "data_size": 7936 00:17:56.493 }, 00:17:56.493 { 00:17:56.493 "name": "BaseBdev2", 00:17:56.493 "uuid": "0f3d944d-96bb-4370-baf6-85e67e8c621e", 00:17:56.493 "is_configured": true, 00:17:56.493 "data_offset": 256, 00:17:56.493 "data_size": 7936 00:17:56.493 } 00:17:56.493 ] 00:17:56.493 }' 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.493 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.752 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:56.752 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:56.752 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:56.752 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:56.752 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:56.752 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:56.752 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:56.752 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:56.752 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.752 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:56.752 [2024-11-15 11:03:03.625448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.752 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.752 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:56.752 "name": "Existed_Raid", 00:17:56.752 "aliases": [ 00:17:56.752 "dfbc83bd-8634-4eba-90d6-a7c6e913c969" 00:17:56.752 ], 00:17:56.752 "product_name": "Raid Volume", 00:17:56.752 "block_size": 4096, 00:17:56.752 "num_blocks": 7936, 00:17:56.752 "uuid": "dfbc83bd-8634-4eba-90d6-a7c6e913c969", 00:17:56.752 "md_size": 32, 00:17:56.752 "md_interleave": false, 00:17:56.752 "dif_type": 0, 00:17:56.752 "assigned_rate_limits": { 00:17:56.752 "rw_ios_per_sec": 0, 00:17:56.752 "rw_mbytes_per_sec": 0, 00:17:56.752 "r_mbytes_per_sec": 0, 00:17:56.752 "w_mbytes_per_sec": 0 00:17:56.752 }, 00:17:56.752 "claimed": false, 00:17:56.752 "zoned": false, 00:17:56.752 "supported_io_types": { 00:17:56.752 "read": true, 00:17:56.752 "write": true, 00:17:56.752 "unmap": false, 00:17:56.752 "flush": false, 00:17:56.752 "reset": true, 00:17:56.752 "nvme_admin": false, 00:17:56.752 "nvme_io": false, 00:17:56.752 "nvme_io_md": false, 00:17:56.752 "write_zeroes": true, 00:17:56.752 "zcopy": false, 00:17:56.753 "get_zone_info": false, 00:17:56.753 "zone_management": false, 00:17:56.753 "zone_append": false, 00:17:56.753 "compare": false, 00:17:56.753 "compare_and_write": false, 00:17:56.753 "abort": false, 00:17:56.753 "seek_hole": false, 00:17:56.753 "seek_data": false, 00:17:56.753 "copy": false, 00:17:56.753 "nvme_iov_md": false 00:17:56.753 }, 00:17:56.753 "memory_domains": [ 00:17:56.753 { 00:17:56.753 "dma_device_id": "system", 00:17:56.753 "dma_device_type": 1 00:17:56.753 }, 00:17:56.753 { 00:17:56.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.753 "dma_device_type": 2 00:17:56.753 }, 00:17:56.753 { 
00:17:56.753 "dma_device_id": "system", 00:17:56.753 "dma_device_type": 1 00:17:56.753 }, 00:17:56.753 { 00:17:56.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.753 "dma_device_type": 2 00:17:56.753 } 00:17:56.753 ], 00:17:56.753 "driver_specific": { 00:17:56.753 "raid": { 00:17:56.753 "uuid": "dfbc83bd-8634-4eba-90d6-a7c6e913c969", 00:17:56.753 "strip_size_kb": 0, 00:17:56.753 "state": "online", 00:17:56.753 "raid_level": "raid1", 00:17:56.753 "superblock": true, 00:17:56.753 "num_base_bdevs": 2, 00:17:56.753 "num_base_bdevs_discovered": 2, 00:17:56.753 "num_base_bdevs_operational": 2, 00:17:56.753 "base_bdevs_list": [ 00:17:56.753 { 00:17:56.753 "name": "BaseBdev1", 00:17:56.753 "uuid": "0753404a-5752-4b0c-9fac-bd368d6c98c9", 00:17:56.753 "is_configured": true, 00:17:56.753 "data_offset": 256, 00:17:56.753 "data_size": 7936 00:17:56.753 }, 00:17:56.753 { 00:17:56.753 "name": "BaseBdev2", 00:17:56.753 "uuid": "0f3d944d-96bb-4370-baf6-85e67e8c621e", 00:17:56.753 "is_configured": true, 00:17:56.753 "data_offset": 256, 00:17:56.753 "data_size": 7936 00:17:56.753 } 00:17:56.753 ] 00:17:56.753 } 00:17:56.753 } 00:17:56.753 }' 00:17:56.753 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:57.013 BaseBdev2' 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.013 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.013 [2024-11-15 11:03:03.888697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:57.276 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.276 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:57.276 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:57.276 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:57.276 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:57.276 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:57.276 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:57.277 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.277 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.277 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.277 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.277 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:57.277 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.277 11:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.277 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.277 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.277 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.277 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.277 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.277 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.277 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.277 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.277 "name": "Existed_Raid", 00:17:57.277 "uuid": "dfbc83bd-8634-4eba-90d6-a7c6e913c969", 00:17:57.277 "strip_size_kb": 0, 00:17:57.277 "state": "online", 00:17:57.277 "raid_level": "raid1", 00:17:57.277 "superblock": true, 00:17:57.277 "num_base_bdevs": 2, 00:17:57.277 "num_base_bdevs_discovered": 1, 00:17:57.277 "num_base_bdevs_operational": 1, 00:17:57.277 "base_bdevs_list": [ 00:17:57.277 { 00:17:57.277 "name": null, 00:17:57.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.277 "is_configured": false, 00:17:57.277 "data_offset": 0, 00:17:57.277 "data_size": 7936 00:17:57.277 }, 00:17:57.277 { 00:17:57.277 "name": "BaseBdev2", 00:17:57.277 "uuid": 
"0f3d944d-96bb-4370-baf6-85e67e8c621e", 00:17:57.277 "is_configured": true, 00:17:57.277 "data_offset": 256, 00:17:57.277 "data_size": 7936 00:17:57.277 } 00:17:57.277 ] 00:17:57.277 }' 00:17:57.277 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.277 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.536 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:57.536 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:57.536 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:57.536 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.536 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.536 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.536 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.536 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:57.536 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:57.536 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:57.536 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.536 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.795 [2024-11-15 11:03:04.463336] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:57.795 [2024-11-15 11:03:04.463441] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.795 [2024-11-15 11:03:04.564211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.796 [2024-11-15 11:03:04.564264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.796 [2024-11-15 11:03:04.564275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:57.796 11:03:04 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87378 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87378 ']' 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 87378 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87378 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87378' 00:17:57.796 killing process with pid 87378 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87378 00:17:57.796 [2024-11-15 11:03:04.665165] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:57.796 11:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87378 00:17:57.796 [2024-11-15 11:03:04.682283] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:59.174 11:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:59.174 00:17:59.174 real 0m5.024s 00:17:59.174 user 0m7.226s 00:17:59.174 sys 0m0.859s 00:17:59.174 11:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:59.174 
11:03:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.174 ************************************ 00:17:59.174 END TEST raid_state_function_test_sb_md_separate 00:17:59.174 ************************************ 00:17:59.174 11:03:05 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:59.174 11:03:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:59.174 11:03:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:59.174 11:03:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.174 ************************************ 00:17:59.174 START TEST raid_superblock_test_md_separate 00:17:59.174 ************************************ 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87625 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87625 00:17:59.174 11:03:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87625 ']' 00:17:59.175 11:03:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.175 11:03:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:59.175 11:03:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:59.175 11:03:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:59.175 11:03:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.175 [2024-11-15 11:03:05.926888] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:17:59.175 [2024-11-15 11:03:05.927105] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87625 ] 00:17:59.433 [2024-11-15 11:03:06.101602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.433 [2024-11-15 11:03:06.220465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.691 [2024-11-15 11:03:06.422787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.691 [2024-11-15 11:03:06.422853] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:59.949 11:03:06 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.949 malloc1 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.949 [2024-11-15 11:03:06.816555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:59.949 [2024-11-15 11:03:06.816673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.949 [2024-11-15 11:03:06.816726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:59.949 [2024-11-15 11:03:06.816757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.949 [2024-11-15 11:03:06.818763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.949 [2024-11-15 11:03:06.818843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:17:59.949 pt1 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.949 malloc2 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:59.949 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.949 11:03:06 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.207 [2024-11-15 11:03:06.877140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.207 [2024-11-15 11:03:06.877259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.207 [2024-11-15 11:03:06.877286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:00.207 [2024-11-15 11:03:06.877295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.207 [2024-11-15 11:03:06.879269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.207 [2024-11-15 11:03:06.879314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.207 pt2 00:18:00.207 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.207 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:00.207 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:00.207 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:00.207 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.207 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.207 [2024-11-15 11:03:06.889150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.207 [2024-11-15 11:03:06.890988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.207 [2024-11-15 11:03:06.891184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:00.207 [2024-11-15 11:03:06.891199] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:00.207 [2024-11-15 11:03:06.891286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:00.207 [2024-11-15 11:03:06.891431] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:00.207 [2024-11-15 11:03:06.891444] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:00.207 [2024-11-15 11:03:06.891554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.207 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.207 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:00.207 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.207 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.207 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.207 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.207 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.207 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.207 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.208 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.208 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.208 11:03:06 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.208 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.208 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.208 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.208 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.208 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.208 "name": "raid_bdev1", 00:18:00.208 "uuid": "148b907d-af93-485b-b96b-b1ad0c077bd1", 00:18:00.208 "strip_size_kb": 0, 00:18:00.208 "state": "online", 00:18:00.208 "raid_level": "raid1", 00:18:00.208 "superblock": true, 00:18:00.208 "num_base_bdevs": 2, 00:18:00.208 "num_base_bdevs_discovered": 2, 00:18:00.208 "num_base_bdevs_operational": 2, 00:18:00.208 "base_bdevs_list": [ 00:18:00.208 { 00:18:00.208 "name": "pt1", 00:18:00.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.208 "is_configured": true, 00:18:00.208 "data_offset": 256, 00:18:00.208 "data_size": 7936 00:18:00.208 }, 00:18:00.208 { 00:18:00.208 "name": "pt2", 00:18:00.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.208 "is_configured": true, 00:18:00.208 "data_offset": 256, 00:18:00.208 "data_size": 7936 00:18:00.208 } 00:18:00.208 ] 00:18:00.208 }' 00:18:00.208 11:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.208 11:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.466 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:00.466 11:03:07 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:00.466 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:00.466 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:00.466 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:00.466 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:00.466 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:00.466 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:00.466 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.466 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.466 [2024-11-15 11:03:07.376721] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:00.725 "name": "raid_bdev1", 00:18:00.725 "aliases": [ 00:18:00.725 "148b907d-af93-485b-b96b-b1ad0c077bd1" 00:18:00.725 ], 00:18:00.725 "product_name": "Raid Volume", 00:18:00.725 "block_size": 4096, 00:18:00.725 "num_blocks": 7936, 00:18:00.725 "uuid": "148b907d-af93-485b-b96b-b1ad0c077bd1", 00:18:00.725 "md_size": 32, 00:18:00.725 "md_interleave": false, 00:18:00.725 "dif_type": 0, 00:18:00.725 "assigned_rate_limits": { 00:18:00.725 "rw_ios_per_sec": 0, 00:18:00.725 "rw_mbytes_per_sec": 0, 00:18:00.725 "r_mbytes_per_sec": 0, 00:18:00.725 "w_mbytes_per_sec": 0 00:18:00.725 }, 00:18:00.725 "claimed": false, 00:18:00.725 "zoned": false, 
00:18:00.725 "supported_io_types": { 00:18:00.725 "read": true, 00:18:00.725 "write": true, 00:18:00.725 "unmap": false, 00:18:00.725 "flush": false, 00:18:00.725 "reset": true, 00:18:00.725 "nvme_admin": false, 00:18:00.725 "nvme_io": false, 00:18:00.725 "nvme_io_md": false, 00:18:00.725 "write_zeroes": true, 00:18:00.725 "zcopy": false, 00:18:00.725 "get_zone_info": false, 00:18:00.725 "zone_management": false, 00:18:00.725 "zone_append": false, 00:18:00.725 "compare": false, 00:18:00.725 "compare_and_write": false, 00:18:00.725 "abort": false, 00:18:00.725 "seek_hole": false, 00:18:00.725 "seek_data": false, 00:18:00.725 "copy": false, 00:18:00.725 "nvme_iov_md": false 00:18:00.725 }, 00:18:00.725 "memory_domains": [ 00:18:00.725 { 00:18:00.725 "dma_device_id": "system", 00:18:00.725 "dma_device_type": 1 00:18:00.725 }, 00:18:00.725 { 00:18:00.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.725 "dma_device_type": 2 00:18:00.725 }, 00:18:00.725 { 00:18:00.725 "dma_device_id": "system", 00:18:00.725 "dma_device_type": 1 00:18:00.725 }, 00:18:00.725 { 00:18:00.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.725 "dma_device_type": 2 00:18:00.725 } 00:18:00.725 ], 00:18:00.725 "driver_specific": { 00:18:00.725 "raid": { 00:18:00.725 "uuid": "148b907d-af93-485b-b96b-b1ad0c077bd1", 00:18:00.725 "strip_size_kb": 0, 00:18:00.725 "state": "online", 00:18:00.725 "raid_level": "raid1", 00:18:00.725 "superblock": true, 00:18:00.725 "num_base_bdevs": 2, 00:18:00.725 "num_base_bdevs_discovered": 2, 00:18:00.725 "num_base_bdevs_operational": 2, 00:18:00.725 "base_bdevs_list": [ 00:18:00.725 { 00:18:00.725 "name": "pt1", 00:18:00.725 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.725 "is_configured": true, 00:18:00.725 "data_offset": 256, 00:18:00.725 "data_size": 7936 00:18:00.725 }, 00:18:00.725 { 00:18:00.725 "name": "pt2", 00:18:00.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.725 "is_configured": true, 00:18:00.725 "data_offset": 256, 
00:18:00.725 "data_size": 7936 00:18:00.725 } 00:18:00.725 ] 00:18:00.725 } 00:18:00.725 } 00:18:00.725 }' 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:00.725 pt2' 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.725 [2024-11-15 11:03:07.588256] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=148b907d-af93-485b-b96b-b1ad0c077bd1 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 148b907d-af93-485b-b96b-b1ad0c077bd1 ']' 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:00.725 11:03:07 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.725 [2024-11-15 11:03:07.631899] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.725 [2024-11-15 11:03:07.631978] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.725 [2024-11-15 11:03:07.632099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.725 [2024-11-15 11:03:07.632183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.725 [2024-11-15 11:03:07.632230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.725 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@650 -- # local es=0 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.985 [2024-11-15 11:03:07.775679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:00.985 [2024-11-15 11:03:07.777626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:00.985 [2024-11-15 11:03:07.777771] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:00.985 [2024-11-15 11:03:07.777897] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:00.985 [2024-11-15 11:03:07.777956] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.985 [2024-11-15 11:03:07.777997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:18:00.985 request: 00:18:00.985 { 00:18:00.985 "name": "raid_bdev1", 00:18:00.985 "raid_level": "raid1", 00:18:00.985 "base_bdevs": [ 00:18:00.985 "malloc1", 00:18:00.985 "malloc2" 00:18:00.985 ], 00:18:00.985 "superblock": false, 00:18:00.985 "method": "bdev_raid_create", 00:18:00.985 "req_id": 1 00:18:00.985 } 00:18:00.985 Got JSON-RPC error response 00:18:00.985 response: 00:18:00.985 { 00:18:00.985 "code": -17, 00:18:00.985 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:00.985 } 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:00.985 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.986 [2024-11-15 11:03:07.843524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:00.986 [2024-11-15 11:03:07.843633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.986 [2024-11-15 11:03:07.843683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:00.986 [2024-11-15 11:03:07.843714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.986 [2024-11-15 11:03:07.845729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.986 [2024-11-15 11:03:07.845806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:00.986 [2024-11-15 11:03:07.845884] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:00.986 [2024-11-15 11:03:07.845958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.986 pt1 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.986 "name": "raid_bdev1", 00:18:00.986 "uuid": "148b907d-af93-485b-b96b-b1ad0c077bd1", 00:18:00.986 "strip_size_kb": 0, 00:18:00.986 "state": "configuring", 00:18:00.986 "raid_level": "raid1", 00:18:00.986 "superblock": true, 00:18:00.986 "num_base_bdevs": 2, 00:18:00.986 "num_base_bdevs_discovered": 1, 00:18:00.986 "num_base_bdevs_operational": 2, 00:18:00.986 "base_bdevs_list": [ 00:18:00.986 { 00:18:00.986 "name": "pt1", 00:18:00.986 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.986 "is_configured": true, 00:18:00.986 "data_offset": 256, 00:18:00.986 "data_size": 7936 00:18:00.986 }, 00:18:00.986 { 
00:18:00.986 "name": null, 00:18:00.986 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.986 "is_configured": false, 00:18:00.986 "data_offset": 256, 00:18:00.986 "data_size": 7936 00:18:00.986 } 00:18:00.986 ] 00:18:00.986 }' 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.986 11:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.552 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:01.552 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:01.552 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:01.552 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:01.552 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.553 [2024-11-15 11:03:08.274798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:01.553 [2024-11-15 11:03:08.274887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.553 [2024-11-15 11:03:08.274909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:01.553 [2024-11-15 11:03:08.274922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.553 [2024-11-15 11:03:08.275171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.553 [2024-11-15 11:03:08.275198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:01.553 [2024-11-15 11:03:08.275255] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:01.553 [2024-11-15 11:03:08.275286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.553 [2024-11-15 11:03:08.275429] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:01.553 [2024-11-15 11:03:08.275450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:01.553 [2024-11-15 11:03:08.275527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:01.553 [2024-11-15 11:03:08.275672] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:01.553 [2024-11-15 11:03:08.275686] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:01.553 [2024-11-15 11:03:08.275794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.553 pt2 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.553 11:03:08 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.553 "name": "raid_bdev1", 00:18:01.553 "uuid": "148b907d-af93-485b-b96b-b1ad0c077bd1", 00:18:01.553 "strip_size_kb": 0, 00:18:01.553 "state": "online", 00:18:01.553 "raid_level": "raid1", 00:18:01.553 "superblock": true, 00:18:01.553 "num_base_bdevs": 2, 00:18:01.553 "num_base_bdevs_discovered": 2, 00:18:01.553 "num_base_bdevs_operational": 2, 00:18:01.553 "base_bdevs_list": [ 00:18:01.553 { 00:18:01.553 "name": "pt1", 00:18:01.553 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.553 "is_configured": true, 00:18:01.553 "data_offset": 256, 00:18:01.553 "data_size": 7936 00:18:01.553 }, 00:18:01.553 { 00:18:01.553 "name": "pt2", 00:18:01.553 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:18:01.553 "is_configured": true, 00:18:01.553 "data_offset": 256, 00:18:01.553 "data_size": 7936 00:18:01.553 } 00:18:01.553 ] 00:18:01.553 }' 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.553 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.833 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:01.833 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:01.833 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:01.833 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:01.833 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:01.833 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:01.833 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:01.833 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.833 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.833 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.833 [2024-11-15 11:03:08.738306] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:02.112 "name": "raid_bdev1", 00:18:02.112 
"aliases": [ 00:18:02.112 "148b907d-af93-485b-b96b-b1ad0c077bd1" 00:18:02.112 ], 00:18:02.112 "product_name": "Raid Volume", 00:18:02.112 "block_size": 4096, 00:18:02.112 "num_blocks": 7936, 00:18:02.112 "uuid": "148b907d-af93-485b-b96b-b1ad0c077bd1", 00:18:02.112 "md_size": 32, 00:18:02.112 "md_interleave": false, 00:18:02.112 "dif_type": 0, 00:18:02.112 "assigned_rate_limits": { 00:18:02.112 "rw_ios_per_sec": 0, 00:18:02.112 "rw_mbytes_per_sec": 0, 00:18:02.112 "r_mbytes_per_sec": 0, 00:18:02.112 "w_mbytes_per_sec": 0 00:18:02.112 }, 00:18:02.112 "claimed": false, 00:18:02.112 "zoned": false, 00:18:02.112 "supported_io_types": { 00:18:02.112 "read": true, 00:18:02.112 "write": true, 00:18:02.112 "unmap": false, 00:18:02.112 "flush": false, 00:18:02.112 "reset": true, 00:18:02.112 "nvme_admin": false, 00:18:02.112 "nvme_io": false, 00:18:02.112 "nvme_io_md": false, 00:18:02.112 "write_zeroes": true, 00:18:02.112 "zcopy": false, 00:18:02.112 "get_zone_info": false, 00:18:02.112 "zone_management": false, 00:18:02.112 "zone_append": false, 00:18:02.112 "compare": false, 00:18:02.112 "compare_and_write": false, 00:18:02.112 "abort": false, 00:18:02.112 "seek_hole": false, 00:18:02.112 "seek_data": false, 00:18:02.112 "copy": false, 00:18:02.112 "nvme_iov_md": false 00:18:02.112 }, 00:18:02.112 "memory_domains": [ 00:18:02.112 { 00:18:02.112 "dma_device_id": "system", 00:18:02.112 "dma_device_type": 1 00:18:02.112 }, 00:18:02.112 { 00:18:02.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.112 "dma_device_type": 2 00:18:02.112 }, 00:18:02.112 { 00:18:02.112 "dma_device_id": "system", 00:18:02.112 "dma_device_type": 1 00:18:02.112 }, 00:18:02.112 { 00:18:02.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.112 "dma_device_type": 2 00:18:02.112 } 00:18:02.112 ], 00:18:02.112 "driver_specific": { 00:18:02.112 "raid": { 00:18:02.112 "uuid": "148b907d-af93-485b-b96b-b1ad0c077bd1", 00:18:02.112 "strip_size_kb": 0, 00:18:02.112 "state": "online", 00:18:02.112 
"raid_level": "raid1", 00:18:02.112 "superblock": true, 00:18:02.112 "num_base_bdevs": 2, 00:18:02.112 "num_base_bdevs_discovered": 2, 00:18:02.112 "num_base_bdevs_operational": 2, 00:18:02.112 "base_bdevs_list": [ 00:18:02.112 { 00:18:02.112 "name": "pt1", 00:18:02.112 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:02.112 "is_configured": true, 00:18:02.112 "data_offset": 256, 00:18:02.112 "data_size": 7936 00:18:02.112 }, 00:18:02.112 { 00:18:02.112 "name": "pt2", 00:18:02.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.112 "is_configured": true, 00:18:02.112 "data_offset": 256, 00:18:02.112 "data_size": 7936 00:18:02.112 } 00:18:02.112 ] 00:18:02.112 } 00:18:02.112 } 00:18:02.112 }' 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:02.112 pt2' 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.112 11:03:08 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.112 [2024-11-15 11:03:08.946013] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 148b907d-af93-485b-b96b-b1ad0c077bd1 '!=' 148b907d-af93-485b-b96b-b1ad0c077bd1 ']' 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.112 [2024-11-15 11:03:08.993697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.112 
11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.112 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.113 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.113 11:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.113 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.113 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.113 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.113 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.113 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.371 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.371 "name": "raid_bdev1", 00:18:02.371 "uuid": "148b907d-af93-485b-b96b-b1ad0c077bd1", 00:18:02.371 "strip_size_kb": 0, 00:18:02.371 "state": "online", 00:18:02.371 "raid_level": "raid1", 00:18:02.371 "superblock": true, 00:18:02.371 "num_base_bdevs": 2, 00:18:02.371 "num_base_bdevs_discovered": 1, 00:18:02.371 "num_base_bdevs_operational": 1, 00:18:02.371 "base_bdevs_list": [ 00:18:02.371 { 00:18:02.371 "name": null, 00:18:02.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.371 "is_configured": false, 00:18:02.371 "data_offset": 0, 00:18:02.371 "data_size": 7936 00:18:02.371 }, 00:18:02.371 { 00:18:02.371 "name": "pt2", 00:18:02.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.371 "is_configured": true, 00:18:02.371 "data_offset": 256, 00:18:02.371 "data_size": 7936 00:18:02.371 } 
00:18:02.371 ] 00:18:02.371 }' 00:18:02.371 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.371 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.630 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:02.630 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.630 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.630 [2024-11-15 11:03:09.444866] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.630 [2024-11-15 11:03:09.444903] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.630 [2024-11-15 11:03:09.444990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.630 [2024-11-15 11:03:09.445038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.630 [2024-11-15 11:03:09.445050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:02.630 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.630 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.630 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:02.630 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.630 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.630 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.630 11:03:09 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:02.630 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:02.630 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:02.630 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:02.630 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:02.630 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.630 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.630 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.631 [2024-11-15 11:03:09.524747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:02.631 [2024-11-15 
11:03:09.524829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.631 [2024-11-15 11:03:09.524849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:02.631 [2024-11-15 11:03:09.524861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.631 [2024-11-15 11:03:09.527000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.631 [2024-11-15 11:03:09.527050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:02.631 [2024-11-15 11:03:09.527111] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:02.631 [2024-11-15 11:03:09.527163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:02.631 [2024-11-15 11:03:09.527271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:02.631 [2024-11-15 11:03:09.527291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:02.631 [2024-11-15 11:03:09.527392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:02.631 [2024-11-15 11:03:09.527523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:02.631 [2024-11-15 11:03:09.527537] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:02.631 [2024-11-15 11:03:09.527641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.631 pt2 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.631 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.889 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.889 "name": "raid_bdev1", 00:18:02.889 "uuid": "148b907d-af93-485b-b96b-b1ad0c077bd1", 00:18:02.889 "strip_size_kb": 0, 00:18:02.889 "state": "online", 00:18:02.889 "raid_level": "raid1", 00:18:02.889 "superblock": true, 00:18:02.889 "num_base_bdevs": 2, 00:18:02.889 
"num_base_bdevs_discovered": 1, 00:18:02.889 "num_base_bdevs_operational": 1, 00:18:02.889 "base_bdevs_list": [ 00:18:02.889 { 00:18:02.889 "name": null, 00:18:02.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.889 "is_configured": false, 00:18:02.889 "data_offset": 256, 00:18:02.889 "data_size": 7936 00:18:02.889 }, 00:18:02.889 { 00:18:02.889 "name": "pt2", 00:18:02.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.889 "is_configured": true, 00:18:02.889 "data_offset": 256, 00:18:02.889 "data_size": 7936 00:18:02.889 } 00:18:02.889 ] 00:18:02.889 }' 00:18:02.889 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.889 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.148 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.148 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.148 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.148 [2024-11-15 11:03:09.967966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.148 [2024-11-15 11:03:09.968004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.148 [2024-11-15 11:03:09.968086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.148 [2024-11-15 11:03:09.968137] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.148 [2024-11-15 11:03:09.968147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:03.148 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.148 11:03:09 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:03.148 11:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.148 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.148 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.148 11:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.148 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:03.148 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:03.148 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:03.148 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:03.148 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.148 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.148 [2024-11-15 11:03:10.023943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:03.148 [2024-11-15 11:03:10.024029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.148 [2024-11-15 11:03:10.024067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:03.148 [2024-11-15 11:03:10.024078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.148 [2024-11-15 11:03:10.026267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.148 [2024-11-15 11:03:10.026323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:18:03.148 [2024-11-15 11:03:10.026396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:03.148 [2024-11-15 11:03:10.026471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:03.148 [2024-11-15 11:03:10.026653] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:03.148 [2024-11-15 11:03:10.026672] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.148 [2024-11-15 11:03:10.026693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:03.148 [2024-11-15 11:03:10.026783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:03.148 [2024-11-15 11:03:10.026878] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:03.148 [2024-11-15 11:03:10.026892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:03.148 [2024-11-15 11:03:10.026981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:03.148 [2024-11-15 11:03:10.027105] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:03.148 [2024-11-15 11:03:10.027124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:03.148 [2024-11-15 11:03:10.027247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.148 pt1 00:18:03.148 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.149 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:03.149 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:18:03.149 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.149 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.149 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.149 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.149 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:03.149 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.149 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.149 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.149 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.149 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.149 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.149 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.149 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.149 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.408 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.408 "name": "raid_bdev1", 00:18:03.408 "uuid": "148b907d-af93-485b-b96b-b1ad0c077bd1", 00:18:03.408 "strip_size_kb": 0, 00:18:03.408 "state": "online", 00:18:03.408 "raid_level": "raid1", 
00:18:03.408 "superblock": true, 00:18:03.408 "num_base_bdevs": 2, 00:18:03.408 "num_base_bdevs_discovered": 1, 00:18:03.408 "num_base_bdevs_operational": 1, 00:18:03.408 "base_bdevs_list": [ 00:18:03.408 { 00:18:03.408 "name": null, 00:18:03.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.408 "is_configured": false, 00:18:03.408 "data_offset": 256, 00:18:03.408 "data_size": 7936 00:18:03.408 }, 00:18:03.408 { 00:18:03.408 "name": "pt2", 00:18:03.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.408 "is_configured": true, 00:18:03.408 "data_offset": 256, 00:18:03.408 "data_size": 7936 00:18:03.408 } 00:18:03.408 ] 00:18:03.408 }' 00:18:03.408 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.408 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.667 
11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.667 [2024-11-15 11:03:10.555265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 148b907d-af93-485b-b96b-b1ad0c077bd1 '!=' 148b907d-af93-485b-b96b-b1ad0c077bd1 ']' 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87625 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87625 ']' 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 87625 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:03.667 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87625 00:18:03.926 killing process with pid 87625 00:18:03.926 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:03.926 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:03.926 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87625' 00:18:03.926 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # kill 87625 00:18:03.926 [2024-11-15 11:03:10.621692] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:03.926 [2024-11-15 11:03:10.621802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:18:03.926 11:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 87625 00:18:03.926 [2024-11-15 11:03:10.621854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.926 [2024-11-15 11:03:10.621873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:04.186 [2024-11-15 11:03:10.874783] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:05.566 11:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:05.566 00:18:05.566 real 0m6.231s 00:18:05.566 user 0m9.358s 00:18:05.566 sys 0m1.139s 00:18:05.566 11:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:05.566 11:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.566 ************************************ 00:18:05.566 END TEST raid_superblock_test_md_separate 00:18:05.566 ************************************ 00:18:05.566 11:03:12 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:05.566 11:03:12 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:05.566 11:03:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:05.566 11:03:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:05.566 11:03:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:05.566 ************************************ 00:18:05.566 START TEST raid_rebuild_test_sb_md_separate 00:18:05.566 ************************************ 00:18:05.566 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:18:05.566 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:18:05.566 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:05.566 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:05.566 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:05.566 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:05.566 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:05.566 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:05.566 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:05.566 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:05.566 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87953 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87953 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87953 ']' 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:05.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:05.567 11:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.567 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:05.567 Zero copy mechanism will not be used. 00:18:05.567 [2024-11-15 11:03:12.227931] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:18:05.567 [2024-11-15 11:03:12.228078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87953 ] 00:18:05.567 [2024-11-15 11:03:12.391154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.826 [2024-11-15 11:03:12.518097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.826 [2024-11-15 11:03:12.734701] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.826 [2024-11-15 11:03:12.734758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.434 BaseBdev1_malloc 
00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.434 [2024-11-15 11:03:13.168227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:06.434 [2024-11-15 11:03:13.168314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.434 [2024-11-15 11:03:13.168341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:06.434 [2024-11-15 11:03:13.168354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.434 [2024-11-15 11:03:13.170618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.434 [2024-11-15 11:03:13.170675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:06.434 BaseBdev1 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.434 BaseBdev2_malloc 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.434 [2024-11-15 11:03:13.227088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:06.434 [2024-11-15 11:03:13.227178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.434 [2024-11-15 11:03:13.227201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:06.434 [2024-11-15 11:03:13.227213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.434 [2024-11-15 11:03:13.229500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.434 [2024-11-15 11:03:13.229545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:06.434 BaseBdev2 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.434 spare_malloc 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.434 spare_delay 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.434 [2024-11-15 11:03:13.315060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:06.434 [2024-11-15 11:03:13.315144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.434 [2024-11-15 11:03:13.315172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:06.434 [2024-11-15 11:03:13.315184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.434 [2024-11-15 11:03:13.317508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.434 [2024-11-15 11:03:13.317556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:06.434 spare 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:06.434 [2024-11-15 11:03:13.327096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.434 [2024-11-15 11:03:13.329257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.434 [2024-11-15 11:03:13.329508] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:06.434 [2024-11-15 11:03:13.329534] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:06.434 [2024-11-15 11:03:13.329645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:06.434 [2024-11-15 11:03:13.329821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:06.434 [2024-11-15 11:03:13.329838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:06.434 [2024-11-15 11:03:13.329968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.434 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.435 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.435 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.435 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.435 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.435 11:03:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.435 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.435 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.435 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.435 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.435 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.435 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.435 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.435 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.694 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.694 "name": "raid_bdev1", 00:18:06.694 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:06.694 "strip_size_kb": 0, 00:18:06.694 "state": "online", 00:18:06.694 "raid_level": "raid1", 00:18:06.694 "superblock": true, 00:18:06.694 "num_base_bdevs": 2, 00:18:06.694 "num_base_bdevs_discovered": 2, 00:18:06.694 "num_base_bdevs_operational": 2, 00:18:06.694 "base_bdevs_list": [ 00:18:06.694 { 00:18:06.694 "name": "BaseBdev1", 00:18:06.694 "uuid": "9580cf0d-c0d8-5662-8122-b24c2f6a61b4", 00:18:06.694 "is_configured": true, 00:18:06.694 "data_offset": 256, 00:18:06.694 "data_size": 7936 00:18:06.694 }, 00:18:06.694 { 00:18:06.694 "name": "BaseBdev2", 00:18:06.694 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:06.694 "is_configured": true, 00:18:06.694 "data_offset": 256, 00:18:06.694 "data_size": 7936 
00:18:06.694 } 00:18:06.694 ] 00:18:06.694 }' 00:18:06.694 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.694 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.954 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.954 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.954 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.954 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:06.954 [2024-11-15 11:03:13.810616] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.954 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.954 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:06.954 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.954 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.954 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.954 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:06.954 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.214 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:07.214 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:07.214 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:07.214 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:07.214 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:07.214 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:07.214 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:07.214 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:07.214 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:07.214 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:07.214 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:07.214 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:07.214 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:07.214 11:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:07.214 [2024-11-15 11:03:14.125819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:07.214 /dev/nbd0 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@871 -- # local i 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:07.475 1+0 records in 00:18:07.475 1+0 records out 00:18:07.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342872 s, 11.9 MB/s 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:07.475 11:03:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:07.475 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:08.051 7936+0 records in 00:18:08.051 7936+0 records out 00:18:08.051 32505856 bytes (33 MB, 31 MiB) copied, 0.715536 s, 45.4 MB/s 00:18:08.051 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:08.051 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:08.051 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:08.051 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:08.051 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:08.052 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:08.052 11:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:08.315 [2024-11-15 11:03:15.158564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:08.315 11:03:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.315 [2024-11-15 11:03:15.174678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.315 "name": "raid_bdev1", 00:18:08.315 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:08.315 "strip_size_kb": 0, 00:18:08.315 "state": "online", 00:18:08.315 "raid_level": "raid1", 00:18:08.315 "superblock": true, 00:18:08.315 "num_base_bdevs": 2, 00:18:08.315 "num_base_bdevs_discovered": 1, 00:18:08.315 "num_base_bdevs_operational": 1, 00:18:08.315 "base_bdevs_list": [ 00:18:08.315 { 00:18:08.315 "name": null, 00:18:08.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.315 "is_configured": false, 00:18:08.315 "data_offset": 0, 00:18:08.315 "data_size": 7936 00:18:08.315 }, 00:18:08.315 { 00:18:08.315 "name": "BaseBdev2", 00:18:08.315 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:08.315 "is_configured": true, 00:18:08.315 "data_offset": 256, 00:18:08.315 "data_size": 7936 00:18:08.315 } 00:18:08.315 ] 00:18:08.315 }' 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.315 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:08.886 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:08.886 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.886 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.886 [2024-11-15 11:03:15.645919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:08.886 [2024-11-15 11:03:15.661166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:08.886 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.886 11:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:08.886 [2024-11-15 11:03:15.663250] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:09.825 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.825 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.825 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.825 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.825 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.825 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.825 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.825 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:09.825 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.825 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.825 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.825 "name": "raid_bdev1", 00:18:09.825 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:09.825 "strip_size_kb": 0, 00:18:09.825 "state": "online", 00:18:09.825 "raid_level": "raid1", 00:18:09.825 "superblock": true, 00:18:09.825 "num_base_bdevs": 2, 00:18:09.825 "num_base_bdevs_discovered": 2, 00:18:09.825 "num_base_bdevs_operational": 2, 00:18:09.825 "process": { 00:18:09.825 "type": "rebuild", 00:18:09.825 "target": "spare", 00:18:09.825 "progress": { 00:18:09.825 "blocks": 2560, 00:18:09.825 "percent": 32 00:18:09.825 } 00:18:09.825 }, 00:18:09.825 "base_bdevs_list": [ 00:18:09.825 { 00:18:09.825 "name": "spare", 00:18:09.825 "uuid": "c84d92cd-f49f-5c2a-a709-03b0cbc95cba", 00:18:09.825 "is_configured": true, 00:18:09.825 "data_offset": 256, 00:18:09.825 "data_size": 7936 00:18:09.825 }, 00:18:09.825 { 00:18:09.825 "name": "BaseBdev2", 00:18:09.825 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:09.825 "is_configured": true, 00:18:09.825 "data_offset": 256, 00:18:09.825 "data_size": 7936 00:18:09.825 } 00:18:09.825 ] 00:18:09.825 }' 00:18:09.825 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.084 11:03:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.084 [2024-11-15 11:03:16.790504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.084 [2024-11-15 11:03:16.869662] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:10.084 [2024-11-15 11:03:16.869758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.084 [2024-11-15 11:03:16.869776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.084 [2024-11-15 11:03:16.869787] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.084 11:03:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.084 "name": "raid_bdev1", 00:18:10.084 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:10.084 "strip_size_kb": 0, 00:18:10.084 "state": "online", 00:18:10.084 "raid_level": "raid1", 00:18:10.084 "superblock": true, 00:18:10.084 "num_base_bdevs": 2, 00:18:10.084 "num_base_bdevs_discovered": 1, 00:18:10.084 "num_base_bdevs_operational": 1, 00:18:10.084 "base_bdevs_list": [ 00:18:10.084 { 00:18:10.084 "name": null, 00:18:10.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.084 "is_configured": false, 00:18:10.084 "data_offset": 0, 00:18:10.084 "data_size": 7936 00:18:10.084 }, 00:18:10.084 { 00:18:10.084 "name": "BaseBdev2", 00:18:10.084 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:10.084 "is_configured": true, 00:18:10.084 "data_offset": 256, 00:18:10.084 "data_size": 7936 00:18:10.084 } 00:18:10.084 ] 00:18:10.084 }' 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.084 11:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.651 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.651 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.651 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:10.651 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:10.651 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.651 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.651 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.651 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.651 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.651 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.651 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.651 "name": "raid_bdev1", 00:18:10.651 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:10.651 "strip_size_kb": 0, 00:18:10.651 "state": "online", 00:18:10.651 "raid_level": "raid1", 00:18:10.651 "superblock": true, 00:18:10.651 "num_base_bdevs": 2, 00:18:10.651 "num_base_bdevs_discovered": 1, 00:18:10.651 "num_base_bdevs_operational": 1, 00:18:10.651 "base_bdevs_list": [ 00:18:10.651 { 00:18:10.651 "name": null, 00:18:10.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.651 
"is_configured": false, 00:18:10.651 "data_offset": 0, 00:18:10.651 "data_size": 7936 00:18:10.651 }, 00:18:10.652 { 00:18:10.652 "name": "BaseBdev2", 00:18:10.652 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:10.652 "is_configured": true, 00:18:10.652 "data_offset": 256, 00:18:10.652 "data_size": 7936 00:18:10.652 } 00:18:10.652 ] 00:18:10.652 }' 00:18:10.652 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.652 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.652 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.652 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.652 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:10.652 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.652 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.652 [2024-11-15 11:03:17.518715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.652 [2024-11-15 11:03:17.535107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:10.652 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.652 11:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:10.652 [2024-11-15 11:03:17.537311] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:12.032 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.032 11:03:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.032 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.032 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.032 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.032 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.032 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.032 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.032 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.032 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.032 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.032 "name": "raid_bdev1", 00:18:12.032 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:12.032 "strip_size_kb": 0, 00:18:12.032 "state": "online", 00:18:12.032 "raid_level": "raid1", 00:18:12.032 "superblock": true, 00:18:12.032 "num_base_bdevs": 2, 00:18:12.032 "num_base_bdevs_discovered": 2, 00:18:12.032 "num_base_bdevs_operational": 2, 00:18:12.032 "process": { 00:18:12.032 "type": "rebuild", 00:18:12.032 "target": "spare", 00:18:12.032 "progress": { 00:18:12.032 "blocks": 2560, 00:18:12.032 "percent": 32 00:18:12.032 } 00:18:12.032 }, 00:18:12.032 "base_bdevs_list": [ 00:18:12.032 { 00:18:12.032 "name": "spare", 00:18:12.032 "uuid": "c84d92cd-f49f-5c2a-a709-03b0cbc95cba", 00:18:12.032 "is_configured": true, 00:18:12.032 "data_offset": 256, 00:18:12.032 "data_size": 7936 00:18:12.032 }, 
00:18:12.032 { 00:18:12.032 "name": "BaseBdev2", 00:18:12.032 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:12.032 "is_configured": true, 00:18:12.032 "data_offset": 256, 00:18:12.032 "data_size": 7936 00:18:12.032 } 00:18:12.032 ] 00:18:12.032 }' 00:18:12.032 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.032 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.032 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.032 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:12.033 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=717 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.033 11:03:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.033 "name": "raid_bdev1", 00:18:12.033 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:12.033 "strip_size_kb": 0, 00:18:12.033 "state": "online", 00:18:12.033 "raid_level": "raid1", 00:18:12.033 "superblock": true, 00:18:12.033 "num_base_bdevs": 2, 00:18:12.033 "num_base_bdevs_discovered": 2, 00:18:12.033 "num_base_bdevs_operational": 2, 00:18:12.033 "process": { 00:18:12.033 "type": "rebuild", 00:18:12.033 "target": "spare", 00:18:12.033 "progress": { 00:18:12.033 "blocks": 2816, 00:18:12.033 "percent": 35 00:18:12.033 } 00:18:12.033 }, 00:18:12.033 "base_bdevs_list": [ 00:18:12.033 { 00:18:12.033 "name": "spare", 00:18:12.033 "uuid": "c84d92cd-f49f-5c2a-a709-03b0cbc95cba", 00:18:12.033 "is_configured": true, 00:18:12.033 "data_offset": 256, 00:18:12.033 "data_size": 7936 00:18:12.033 }, 00:18:12.033 { 00:18:12.033 "name": "BaseBdev2", 00:18:12.033 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:12.033 
"is_configured": true, 00:18:12.033 "data_offset": 256, 00:18:12.033 "data_size": 7936 00:18:12.033 } 00:18:12.033 ] 00:18:12.033 }' 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.033 11:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:12.970 11:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:12.970 11:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.970 11:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.970 11:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.970 11:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.970 11:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.970 11:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.970 11:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.970 11:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.970 11:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.970 11:03:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.970 11:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.970 "name": "raid_bdev1", 00:18:12.970 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:12.970 "strip_size_kb": 0, 00:18:12.970 "state": "online", 00:18:12.970 "raid_level": "raid1", 00:18:12.970 "superblock": true, 00:18:12.970 "num_base_bdevs": 2, 00:18:12.970 "num_base_bdevs_discovered": 2, 00:18:12.970 "num_base_bdevs_operational": 2, 00:18:12.970 "process": { 00:18:12.970 "type": "rebuild", 00:18:12.970 "target": "spare", 00:18:12.970 "progress": { 00:18:12.970 "blocks": 5632, 00:18:12.970 "percent": 70 00:18:12.970 } 00:18:12.970 }, 00:18:12.970 "base_bdevs_list": [ 00:18:12.970 { 00:18:12.970 "name": "spare", 00:18:12.970 "uuid": "c84d92cd-f49f-5c2a-a709-03b0cbc95cba", 00:18:12.970 "is_configured": true, 00:18:12.970 "data_offset": 256, 00:18:12.970 "data_size": 7936 00:18:12.970 }, 00:18:12.970 { 00:18:12.970 "name": "BaseBdev2", 00:18:12.970 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:12.970 "is_configured": true, 00:18:12.970 "data_offset": 256, 00:18:12.970 "data_size": 7936 00:18:12.970 } 00:18:12.970 ] 00:18:12.970 }' 00:18:12.970 11:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.230 11:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.230 11:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.230 11:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.230 11:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:13.798 [2024-11-15 11:03:20.652983] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:18:13.798 [2024-11-15 11:03:20.653083] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:13.799 [2024-11-15 11:03:20.653202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.057 11:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:14.057 11:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.057 11:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.057 11:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.057 11:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.057 11:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.317 11:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.317 11:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.317 11:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.317 11:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.317 "name": "raid_bdev1", 00:18:14.317 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:14.317 "strip_size_kb": 0, 00:18:14.317 "state": "online", 00:18:14.317 "raid_level": "raid1", 00:18:14.317 "superblock": true, 00:18:14.317 
"num_base_bdevs": 2, 00:18:14.317 "num_base_bdevs_discovered": 2, 00:18:14.317 "num_base_bdevs_operational": 2, 00:18:14.317 "base_bdevs_list": [ 00:18:14.317 { 00:18:14.317 "name": "spare", 00:18:14.317 "uuid": "c84d92cd-f49f-5c2a-a709-03b0cbc95cba", 00:18:14.317 "is_configured": true, 00:18:14.317 "data_offset": 256, 00:18:14.317 "data_size": 7936 00:18:14.317 }, 00:18:14.317 { 00:18:14.317 "name": "BaseBdev2", 00:18:14.317 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:14.317 "is_configured": true, 00:18:14.317 "data_offset": 256, 00:18:14.317 "data_size": 7936 00:18:14.317 } 00:18:14.317 ] 00:18:14.317 }' 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.317 
11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.317 "name": "raid_bdev1", 00:18:14.317 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:14.317 "strip_size_kb": 0, 00:18:14.317 "state": "online", 00:18:14.317 "raid_level": "raid1", 00:18:14.317 "superblock": true, 00:18:14.317 "num_base_bdevs": 2, 00:18:14.317 "num_base_bdevs_discovered": 2, 00:18:14.317 "num_base_bdevs_operational": 2, 00:18:14.317 "base_bdevs_list": [ 00:18:14.317 { 00:18:14.317 "name": "spare", 00:18:14.317 "uuid": "c84d92cd-f49f-5c2a-a709-03b0cbc95cba", 00:18:14.317 "is_configured": true, 00:18:14.317 "data_offset": 256, 00:18:14.317 "data_size": 7936 00:18:14.317 }, 00:18:14.317 { 00:18:14.317 "name": "BaseBdev2", 00:18:14.317 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:14.317 "is_configured": true, 00:18:14.317 "data_offset": 256, 00:18:14.317 "data_size": 7936 00:18:14.317 } 00:18:14.317 ] 00:18:14.317 }' 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.317 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.576 "name": "raid_bdev1", 00:18:14.576 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:14.576 
"strip_size_kb": 0, 00:18:14.576 "state": "online", 00:18:14.576 "raid_level": "raid1", 00:18:14.576 "superblock": true, 00:18:14.576 "num_base_bdevs": 2, 00:18:14.576 "num_base_bdevs_discovered": 2, 00:18:14.576 "num_base_bdevs_operational": 2, 00:18:14.576 "base_bdevs_list": [ 00:18:14.576 { 00:18:14.576 "name": "spare", 00:18:14.576 "uuid": "c84d92cd-f49f-5c2a-a709-03b0cbc95cba", 00:18:14.576 "is_configured": true, 00:18:14.576 "data_offset": 256, 00:18:14.576 "data_size": 7936 00:18:14.576 }, 00:18:14.576 { 00:18:14.576 "name": "BaseBdev2", 00:18:14.576 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:14.576 "is_configured": true, 00:18:14.576 "data_offset": 256, 00:18:14.576 "data_size": 7936 00:18:14.576 } 00:18:14.576 ] 00:18:14.576 }' 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.576 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.835 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:14.835 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.835 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.835 [2024-11-15 11:03:21.716917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.836 [2024-11-15 11:03:21.716964] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.836 [2024-11-15 11:03:21.717067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.836 [2024-11-15 11:03:21.717143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.836 [2024-11-15 11:03:21.717154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:18:14.836 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.836 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.836 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.836 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.836 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:14.836 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.095 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:15.095 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:15.095 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:15.095 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:15.095 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:15.095 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:15.095 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:15.095 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:15.095 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:15.095 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:15.095 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:15.095 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:15.095 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:15.095 /dev/nbd0 00:18:15.095 11:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:15.095 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:15.095 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:15.095 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:18:15.095 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:15.095 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:15.095 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:15.095 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:18:15.095 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:15.095 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:15.095 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:15.095 1+0 records in 00:18:15.095 1+0 records out 00:18:15.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381304 s, 10.7 MB/s 00:18:15.095 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:15.354 /dev/nbd1 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:15.354 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:15.614 1+0 records in 00:18:15.614 1+0 records out 00:18:15.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478549 s, 8.6 MB/s 00:18:15.614 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.614 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:18:15.614 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.614 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:15.614 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:18:15.614 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:15.614 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:15.614 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:15.614 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:15.614 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:15.614 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:15.614 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:18:15.614 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:15.614 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:15.614 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:15.873 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:15.873 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:15.873 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:15.873 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:15.873 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:15.873 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:15.873 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:15.873 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:15.873 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:15.873 11:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:16.132 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:16.132 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:16.132 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:18:16.132 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:16.132 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:16.132 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:16.391 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:16.391 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:16.391 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:16.391 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:16.391 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.391 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.391 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.391 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:16.391 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.392 [2024-11-15 11:03:23.079824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:16.392 [2024-11-15 11:03:23.079911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.392 [2024-11-15 11:03:23.079938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:16.392 [2024-11-15 11:03:23.079948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:16.392 [2024-11-15 11:03:23.082067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.392 [2024-11-15 11:03:23.082108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:16.392 [2024-11-15 11:03:23.082186] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:16.392 [2024-11-15 11:03:23.082253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.392 [2024-11-15 11:03:23.082430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.392 spare 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.392 [2024-11-15 11:03:23.182347] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:16.392 [2024-11-15 11:03:23.182419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:16.392 [2024-11-15 11:03:23.182595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:16.392 [2024-11-15 11:03:23.182802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:16.392 [2024-11-15 11:03:23.182823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:16.392 [2024-11-15 11:03:23.182986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.392 "name": "raid_bdev1", 00:18:16.392 "uuid": 
"c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:16.392 "strip_size_kb": 0, 00:18:16.392 "state": "online", 00:18:16.392 "raid_level": "raid1", 00:18:16.392 "superblock": true, 00:18:16.392 "num_base_bdevs": 2, 00:18:16.392 "num_base_bdevs_discovered": 2, 00:18:16.392 "num_base_bdevs_operational": 2, 00:18:16.392 "base_bdevs_list": [ 00:18:16.392 { 00:18:16.392 "name": "spare", 00:18:16.392 "uuid": "c84d92cd-f49f-5c2a-a709-03b0cbc95cba", 00:18:16.392 "is_configured": true, 00:18:16.392 "data_offset": 256, 00:18:16.392 "data_size": 7936 00:18:16.392 }, 00:18:16.392 { 00:18:16.392 "name": "BaseBdev2", 00:18:16.392 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:16.392 "is_configured": true, 00:18:16.392 "data_offset": 256, 00:18:16.392 "data_size": 7936 00:18:16.392 } 00:18:16.392 ] 00:18:16.392 }' 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.392 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.961 "name": "raid_bdev1", 00:18:16.961 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:16.961 "strip_size_kb": 0, 00:18:16.961 "state": "online", 00:18:16.961 "raid_level": "raid1", 00:18:16.961 "superblock": true, 00:18:16.961 "num_base_bdevs": 2, 00:18:16.961 "num_base_bdevs_discovered": 2, 00:18:16.961 "num_base_bdevs_operational": 2, 00:18:16.961 "base_bdevs_list": [ 00:18:16.961 { 00:18:16.961 "name": "spare", 00:18:16.961 "uuid": "c84d92cd-f49f-5c2a-a709-03b0cbc95cba", 00:18:16.961 "is_configured": true, 00:18:16.961 "data_offset": 256, 00:18:16.961 "data_size": 7936 00:18:16.961 }, 00:18:16.961 { 00:18:16.961 "name": "BaseBdev2", 00:18:16.961 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:16.961 "is_configured": true, 00:18:16.961 "data_offset": 256, 00:18:16.961 "data_size": 7936 00:18:16.961 } 00:18:16.961 ] 00:18:16.961 }' 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.961 [2024-11-15 11:03:23.846631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.961 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.221 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.221 "name": "raid_bdev1", 00:18:17.221 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:17.221 "strip_size_kb": 0, 00:18:17.221 "state": "online", 00:18:17.221 "raid_level": "raid1", 00:18:17.221 "superblock": true, 00:18:17.221 "num_base_bdevs": 2, 00:18:17.221 "num_base_bdevs_discovered": 1, 00:18:17.221 "num_base_bdevs_operational": 1, 00:18:17.221 "base_bdevs_list": [ 00:18:17.221 { 00:18:17.221 "name": null, 00:18:17.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.221 "is_configured": false, 00:18:17.221 "data_offset": 0, 00:18:17.221 "data_size": 7936 00:18:17.221 }, 00:18:17.221 { 00:18:17.221 "name": "BaseBdev2", 00:18:17.221 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:17.221 "is_configured": true, 00:18:17.221 "data_offset": 256, 00:18:17.221 "data_size": 7936 00:18:17.221 } 00:18:17.221 ] 00:18:17.221 }' 00:18:17.221 11:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.221 11:03:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.480 11:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:17.480 11:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.480 11:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.480 [2024-11-15 11:03:24.313832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.480 [2024-11-15 11:03:24.314046] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:17.480 [2024-11-15 11:03:24.314065] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:17.480 [2024-11-15 11:03:24.314100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.480 [2024-11-15 11:03:24.329347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:17.480 11:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.480 11:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:17.480 [2024-11-15 11:03:24.331399] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:18.456 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.456 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.456 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.456 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:18.456 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.456 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.456 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.456 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.456 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.456 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.715 "name": "raid_bdev1", 00:18:18.715 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:18.715 "strip_size_kb": 0, 00:18:18.715 "state": "online", 00:18:18.715 "raid_level": "raid1", 00:18:18.715 "superblock": true, 00:18:18.715 "num_base_bdevs": 2, 00:18:18.715 "num_base_bdevs_discovered": 2, 00:18:18.715 "num_base_bdevs_operational": 2, 00:18:18.715 "process": { 00:18:18.715 "type": "rebuild", 00:18:18.715 "target": "spare", 00:18:18.715 "progress": { 00:18:18.715 "blocks": 2560, 00:18:18.715 "percent": 32 00:18:18.715 } 00:18:18.715 }, 00:18:18.715 "base_bdevs_list": [ 00:18:18.715 { 00:18:18.715 "name": "spare", 00:18:18.715 "uuid": "c84d92cd-f49f-5c2a-a709-03b0cbc95cba", 00:18:18.715 "is_configured": true, 00:18:18.715 "data_offset": 256, 00:18:18.715 "data_size": 7936 00:18:18.715 }, 00:18:18.715 { 00:18:18.715 "name": "BaseBdev2", 00:18:18.715 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:18.715 "is_configured": true, 00:18:18.715 "data_offset": 256, 00:18:18.715 "data_size": 7936 00:18:18.715 } 00:18:18.715 ] 00:18:18.715 }' 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.715 [2024-11-15 11:03:25.492913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.715 [2024-11-15 11:03:25.537393] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:18.715 [2024-11-15 11:03:25.537482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.715 [2024-11-15 11:03:25.537499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.715 [2024-11-15 11:03:25.537521] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.715 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.716 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.716 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.716 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.716 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.716 "name": "raid_bdev1", 00:18:18.716 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:18.716 "strip_size_kb": 0, 00:18:18.716 "state": "online", 00:18:18.716 "raid_level": "raid1", 00:18:18.716 "superblock": true, 00:18:18.716 "num_base_bdevs": 2, 00:18:18.716 "num_base_bdevs_discovered": 1, 00:18:18.716 "num_base_bdevs_operational": 1, 00:18:18.716 "base_bdevs_list": [ 00:18:18.716 { 00:18:18.716 "name": null, 00:18:18.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.716 
"is_configured": false, 00:18:18.716 "data_offset": 0, 00:18:18.716 "data_size": 7936 00:18:18.716 }, 00:18:18.716 { 00:18:18.716 "name": "BaseBdev2", 00:18:18.716 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:18.716 "is_configured": true, 00:18:18.716 "data_offset": 256, 00:18:18.716 "data_size": 7936 00:18:18.716 } 00:18:18.716 ] 00:18:18.716 }' 00:18:18.716 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.716 11:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.283 11:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:19.283 11:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.283 11:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.283 [2024-11-15 11:03:26.050399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:19.283 [2024-11-15 11:03:26.050471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.283 [2024-11-15 11:03:26.050499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:19.283 [2024-11-15 11:03:26.050511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.283 [2024-11-15 11:03:26.050795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.283 [2024-11-15 11:03:26.050819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:19.283 [2024-11-15 11:03:26.050886] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:19.283 [2024-11-15 11:03:26.050901] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:18:19.283 [2024-11-15 11:03:26.050911] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:19.283 [2024-11-15 11:03:26.050937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:19.283 [2024-11-15 11:03:26.066672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:19.283 spare 00:18:19.283 11:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.283 [2024-11-15 11:03:26.068600] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:19.283 11:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:20.223 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.223 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.223 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.223 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.223 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.223 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.223 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.223 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.223 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.223 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:20.223 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.223 "name": "raid_bdev1", 00:18:20.223 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:20.223 "strip_size_kb": 0, 00:18:20.223 "state": "online", 00:18:20.223 "raid_level": "raid1", 00:18:20.223 "superblock": true, 00:18:20.223 "num_base_bdevs": 2, 00:18:20.223 "num_base_bdevs_discovered": 2, 00:18:20.223 "num_base_bdevs_operational": 2, 00:18:20.223 "process": { 00:18:20.223 "type": "rebuild", 00:18:20.223 "target": "spare", 00:18:20.223 "progress": { 00:18:20.223 "blocks": 2560, 00:18:20.223 "percent": 32 00:18:20.223 } 00:18:20.223 }, 00:18:20.223 "base_bdevs_list": [ 00:18:20.223 { 00:18:20.223 "name": "spare", 00:18:20.223 "uuid": "c84d92cd-f49f-5c2a-a709-03b0cbc95cba", 00:18:20.223 "is_configured": true, 00:18:20.223 "data_offset": 256, 00:18:20.223 "data_size": 7936 00:18:20.223 }, 00:18:20.223 { 00:18:20.223 "name": "BaseBdev2", 00:18:20.223 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:20.223 "is_configured": true, 00:18:20.223 "data_offset": 256, 00:18:20.223 "data_size": 7936 00:18:20.223 } 00:18:20.223 ] 00:18:20.223 }' 00:18:20.223 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.483 11:03:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.483 [2024-11-15 11:03:27.224923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.483 [2024-11-15 11:03:27.274675] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:20.483 [2024-11-15 11:03:27.274756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.483 [2024-11-15 11:03:27.274776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.483 [2024-11-15 11:03:27.274784] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.483 11:03:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.483 "name": "raid_bdev1", 00:18:20.483 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:20.483 "strip_size_kb": 0, 00:18:20.483 "state": "online", 00:18:20.483 "raid_level": "raid1", 00:18:20.483 "superblock": true, 00:18:20.483 "num_base_bdevs": 2, 00:18:20.483 "num_base_bdevs_discovered": 1, 00:18:20.483 "num_base_bdevs_operational": 1, 00:18:20.483 "base_bdevs_list": [ 00:18:20.483 { 00:18:20.483 "name": null, 00:18:20.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.483 "is_configured": false, 00:18:20.483 "data_offset": 0, 00:18:20.483 "data_size": 7936 00:18:20.483 }, 00:18:20.483 { 00:18:20.483 "name": "BaseBdev2", 00:18:20.483 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:20.483 "is_configured": true, 00:18:20.483 "data_offset": 256, 00:18:20.483 "data_size": 7936 00:18:20.483 } 00:18:20.483 ] 00:18:20.483 }' 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.483 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.052 "name": "raid_bdev1", 00:18:21.052 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:21.052 "strip_size_kb": 0, 00:18:21.052 "state": "online", 00:18:21.052 "raid_level": "raid1", 00:18:21.052 "superblock": true, 00:18:21.052 "num_base_bdevs": 2, 00:18:21.052 "num_base_bdevs_discovered": 1, 00:18:21.052 "num_base_bdevs_operational": 1, 00:18:21.052 "base_bdevs_list": [ 00:18:21.052 { 00:18:21.052 "name": null, 00:18:21.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.052 "is_configured": false, 00:18:21.052 "data_offset": 0, 00:18:21.052 "data_size": 7936 00:18:21.052 }, 00:18:21.052 { 00:18:21.052 "name": "BaseBdev2", 00:18:21.052 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:21.052 "is_configured": true, 
00:18:21.052 "data_offset": 256, 00:18:21.052 "data_size": 7936 00:18:21.052 } 00:18:21.052 ] 00:18:21.052 }' 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.052 [2024-11-15 11:03:27.895311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:21.052 [2024-11-15 11:03:27.895381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.052 [2024-11-15 11:03:27.895407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:21.052 [2024-11-15 11:03:27.895417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.052 [2024-11-15 11:03:27.895636] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.052 [2024-11-15 11:03:27.895648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:21.052 [2024-11-15 11:03:27.895708] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:21.052 [2024-11-15 11:03:27.895724] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:21.052 [2024-11-15 11:03:27.895733] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:21.052 [2024-11-15 11:03:27.895743] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:21.052 BaseBdev1 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.052 11:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:21.990 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.990 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.990 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.990 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.990 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.990 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.990 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.990 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.990 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.990 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.990 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.990 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.990 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.990 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.249 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.249 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.249 "name": "raid_bdev1", 00:18:22.249 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:22.249 "strip_size_kb": 0, 00:18:22.249 "state": "online", 00:18:22.249 "raid_level": "raid1", 00:18:22.249 "superblock": true, 00:18:22.249 "num_base_bdevs": 2, 00:18:22.249 "num_base_bdevs_discovered": 1, 00:18:22.249 "num_base_bdevs_operational": 1, 00:18:22.249 "base_bdevs_list": [ 00:18:22.249 { 00:18:22.249 "name": null, 00:18:22.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.249 "is_configured": false, 00:18:22.249 "data_offset": 0, 00:18:22.249 "data_size": 7936 00:18:22.249 }, 00:18:22.249 { 00:18:22.249 "name": "BaseBdev2", 00:18:22.249 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:22.249 "is_configured": true, 00:18:22.249 "data_offset": 256, 00:18:22.249 "data_size": 7936 00:18:22.249 } 00:18:22.249 ] 00:18:22.249 }' 00:18:22.249 11:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.249 11:03:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.508 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:22.508 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.508 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:22.509 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:22.509 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.509 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.509 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.509 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.509 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.509 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.509 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.509 "name": "raid_bdev1", 00:18:22.509 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:22.509 "strip_size_kb": 0, 00:18:22.509 "state": "online", 00:18:22.509 "raid_level": "raid1", 00:18:22.509 "superblock": true, 00:18:22.509 "num_base_bdevs": 2, 00:18:22.509 "num_base_bdevs_discovered": 1, 00:18:22.509 "num_base_bdevs_operational": 1, 00:18:22.509 "base_bdevs_list": [ 00:18:22.509 { 00:18:22.509 "name": null, 00:18:22.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.509 "is_configured": false, 00:18:22.509 "data_offset": 0, 00:18:22.509 
"data_size": 7936 00:18:22.509 }, 00:18:22.509 { 00:18:22.509 "name": "BaseBdev2", 00:18:22.509 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:22.509 "is_configured": true, 00:18:22.509 "data_offset": 256, 00:18:22.509 "data_size": 7936 00:18:22.509 } 00:18:22.509 ] 00:18:22.509 }' 00:18:22.509 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.509 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:22.509 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.768 [2024-11-15 11:03:29.460723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:22.768 [2024-11-15 11:03:29.460911] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:22.768 [2024-11-15 11:03:29.460932] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:22.768 request: 00:18:22.768 { 00:18:22.768 "base_bdev": "BaseBdev1", 00:18:22.768 "raid_bdev": "raid_bdev1", 00:18:22.768 "method": "bdev_raid_add_base_bdev", 00:18:22.768 "req_id": 1 00:18:22.768 } 00:18:22.768 Got JSON-RPC error response 00:18:22.768 response: 00:18:22.768 { 00:18:22.768 "code": -22, 00:18:22.768 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:22.768 } 00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:22.768 11:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.709 "name": "raid_bdev1", 00:18:23.709 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:23.709 "strip_size_kb": 0, 00:18:23.709 "state": "online", 00:18:23.709 "raid_level": "raid1", 00:18:23.709 "superblock": true, 00:18:23.709 "num_base_bdevs": 2, 00:18:23.709 "num_base_bdevs_discovered": 1, 00:18:23.709 "num_base_bdevs_operational": 1, 00:18:23.709 "base_bdevs_list": [ 
00:18:23.709 { 00:18:23.709 "name": null, 00:18:23.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.709 "is_configured": false, 00:18:23.709 "data_offset": 0, 00:18:23.709 "data_size": 7936 00:18:23.709 }, 00:18:23.709 { 00:18:23.709 "name": "BaseBdev2", 00:18:23.709 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:23.709 "is_configured": true, 00:18:23.709 "data_offset": 256, 00:18:23.709 "data_size": 7936 00:18:23.709 } 00:18:23.709 ] 00:18:23.709 }' 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.709 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.278 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:24.278 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.278 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:24.278 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:24.278 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.278 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.278 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.278 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.278 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.278 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.278 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.278 "name": "raid_bdev1", 00:18:24.278 "uuid": "c0f9bf3e-e135-468b-96ec-bc0761a0780a", 00:18:24.278 "strip_size_kb": 0, 00:18:24.278 "state": "online", 00:18:24.278 "raid_level": "raid1", 00:18:24.278 "superblock": true, 00:18:24.278 "num_base_bdevs": 2, 00:18:24.278 "num_base_bdevs_discovered": 1, 00:18:24.278 "num_base_bdevs_operational": 1, 00:18:24.278 "base_bdevs_list": [ 00:18:24.278 { 00:18:24.278 "name": null, 00:18:24.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.278 "is_configured": false, 00:18:24.278 "data_offset": 0, 00:18:24.278 "data_size": 7936 00:18:24.278 }, 00:18:24.278 { 00:18:24.278 "name": "BaseBdev2", 00:18:24.278 "uuid": "df120221-bcb6-52cb-a036-6b66aa8ea67b", 00:18:24.278 "is_configured": true, 00:18:24.278 "data_offset": 256, 00:18:24.278 "data_size": 7936 00:18:24.278 } 00:18:24.278 ] 00:18:24.278 }' 00:18:24.278 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.278 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:24.278 11:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.278 11:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:24.278 11:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87953 00:18:24.278 11:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87953 ']' 00:18:24.278 11:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 87953 00:18:24.278 11:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:18:24.278 11:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:24.278 
11:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87953 00:18:24.278 11:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:24.278 11:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:24.278 killing process with pid 87953 00:18:24.278 11:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87953' 00:18:24.278 11:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87953 00:18:24.278 Received shutdown signal, test time was about 60.000000 seconds 00:18:24.278 00:18:24.278 Latency(us) 00:18:24.278 [2024-11-15T11:03:31.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.278 [2024-11-15T11:03:31.206Z] =================================================================================================================== 00:18:24.278 [2024-11-15T11:03:31.206Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:24.278 [2024-11-15 11:03:31.060959] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:24.278 11:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87953 00:18:24.278 [2024-11-15 11:03:31.061100] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.278 [2024-11-15 11:03:31.061156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.278 [2024-11-15 11:03:31.061169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:24.537 [2024-11-15 11:03:31.391781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:25.915 11:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:18:25.915 00:18:25.915 real 0m20.381s 00:18:25.915 user 0m26.729s 00:18:25.915 sys 0m2.679s 00:18:25.915 11:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:25.915 11:03:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.915 ************************************ 00:18:25.915 END TEST raid_rebuild_test_sb_md_separate 00:18:25.915 ************************************ 00:18:25.915 11:03:32 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:25.915 11:03:32 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:25.915 11:03:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:25.915 11:03:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:25.915 11:03:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.915 ************************************ 00:18:25.915 START TEST raid_state_function_test_sb_md_interleaved 00:18:25.915 ************************************ 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:25.916 11:03:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88646 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88646' 00:18:25.916 Process raid pid: 88646 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88646 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 88646 ']' 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:25.916 11:03:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.916 [2024-11-15 11:03:32.675790] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:18:25.916 [2024-11-15 11:03:32.675928] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.916 [2024-11-15 11:03:32.832700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.175 [2024-11-15 11:03:32.960606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.434 [2024-11-15 11:03:33.160052] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.434 [2024-11-15 11:03:33.160105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.693 [2024-11-15 11:03:33.534377] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:26.693 [2024-11-15 11:03:33.534430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:26.693 [2024-11-15 11:03:33.534442] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:26.693 [2024-11-15 11:03:33.534468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:26.693 11:03:33 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.693 11:03:33 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.693 "name": "Existed_Raid", 00:18:26.693 "uuid": "3a7d75cd-297c-408a-bd1e-1ee0fb30c2af", 00:18:26.693 "strip_size_kb": 0, 00:18:26.693 "state": "configuring", 00:18:26.693 "raid_level": "raid1", 00:18:26.693 "superblock": true, 00:18:26.693 "num_base_bdevs": 2, 00:18:26.693 "num_base_bdevs_discovered": 0, 00:18:26.693 "num_base_bdevs_operational": 2, 00:18:26.693 "base_bdevs_list": [ 00:18:26.693 { 00:18:26.693 "name": "BaseBdev1", 00:18:26.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.693 "is_configured": false, 00:18:26.693 "data_offset": 0, 00:18:26.693 "data_size": 0 00:18:26.693 }, 00:18:26.693 { 00:18:26.693 "name": "BaseBdev2", 00:18:26.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.693 "is_configured": false, 00:18:26.693 "data_offset": 0, 00:18:26.693 "data_size": 0 00:18:26.693 } 00:18:26.693 ] 00:18:26.693 }' 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.693 11:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.262 [2024-11-15 11:03:34.029472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:27.262 [2024-11-15 11:03:34.029514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.262 [2024-11-15 11:03:34.041485] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:27.262 [2024-11-15 11:03:34.041527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:27.262 [2024-11-15 11:03:34.041537] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.262 [2024-11-15 11:03:34.041550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.262 [2024-11-15 11:03:34.090132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.262 BaseBdev1 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.262 [ 00:18:27.262 { 00:18:27.262 "name": "BaseBdev1", 00:18:27.262 "aliases": [ 00:18:27.262 "0cd04a83-0003-4d35-b11f-fa575ad22a50" 00:18:27.262 ], 00:18:27.262 "product_name": "Malloc disk", 00:18:27.262 "block_size": 4128, 00:18:27.262 "num_blocks": 8192, 00:18:27.262 "uuid": "0cd04a83-0003-4d35-b11f-fa575ad22a50", 00:18:27.262 "md_size": 32, 00:18:27.262 
"md_interleave": true, 00:18:27.262 "dif_type": 0, 00:18:27.262 "assigned_rate_limits": { 00:18:27.262 "rw_ios_per_sec": 0, 00:18:27.262 "rw_mbytes_per_sec": 0, 00:18:27.262 "r_mbytes_per_sec": 0, 00:18:27.262 "w_mbytes_per_sec": 0 00:18:27.262 }, 00:18:27.262 "claimed": true, 00:18:27.262 "claim_type": "exclusive_write", 00:18:27.262 "zoned": false, 00:18:27.262 "supported_io_types": { 00:18:27.262 "read": true, 00:18:27.262 "write": true, 00:18:27.262 "unmap": true, 00:18:27.262 "flush": true, 00:18:27.262 "reset": true, 00:18:27.262 "nvme_admin": false, 00:18:27.262 "nvme_io": false, 00:18:27.262 "nvme_io_md": false, 00:18:27.262 "write_zeroes": true, 00:18:27.262 "zcopy": true, 00:18:27.262 "get_zone_info": false, 00:18:27.262 "zone_management": false, 00:18:27.262 "zone_append": false, 00:18:27.262 "compare": false, 00:18:27.262 "compare_and_write": false, 00:18:27.262 "abort": true, 00:18:27.262 "seek_hole": false, 00:18:27.262 "seek_data": false, 00:18:27.262 "copy": true, 00:18:27.262 "nvme_iov_md": false 00:18:27.262 }, 00:18:27.262 "memory_domains": [ 00:18:27.262 { 00:18:27.262 "dma_device_id": "system", 00:18:27.262 "dma_device_type": 1 00:18:27.262 }, 00:18:27.262 { 00:18:27.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.262 "dma_device_type": 2 00:18:27.262 } 00:18:27.262 ], 00:18:27.262 "driver_specific": {} 00:18:27.262 } 00:18:27.262 ] 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.262 11:03:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.262 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.263 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.263 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.263 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.263 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.263 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.263 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.263 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.263 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.263 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.263 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.263 "name": "Existed_Raid", 00:18:27.263 "uuid": "40068940-c72f-4c8f-8dfb-175975544858", 00:18:27.263 "strip_size_kb": 0, 00:18:27.263 "state": "configuring", 00:18:27.263 "raid_level": "raid1", 
00:18:27.263 "superblock": true, 00:18:27.263 "num_base_bdevs": 2, 00:18:27.263 "num_base_bdevs_discovered": 1, 00:18:27.263 "num_base_bdevs_operational": 2, 00:18:27.263 "base_bdevs_list": [ 00:18:27.263 { 00:18:27.263 "name": "BaseBdev1", 00:18:27.263 "uuid": "0cd04a83-0003-4d35-b11f-fa575ad22a50", 00:18:27.263 "is_configured": true, 00:18:27.263 "data_offset": 256, 00:18:27.263 "data_size": 7936 00:18:27.263 }, 00:18:27.263 { 00:18:27.263 "name": "BaseBdev2", 00:18:27.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.263 "is_configured": false, 00:18:27.263 "data_offset": 0, 00:18:27.263 "data_size": 0 00:18:27.263 } 00:18:27.263 ] 00:18:27.263 }' 00:18:27.263 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.263 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.830 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:27.830 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.830 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.830 [2024-11-15 11:03:34.553454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:27.830 [2024-11-15 11:03:34.553522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:27.830 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.830 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:27.830 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:27.830 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.830 [2024-11-15 11:03:34.565517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.830 [2024-11-15 11:03:34.567494] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.830 [2024-11-15 11:03:34.567538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.830 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.830 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:27.830 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:27.830 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:27.830 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.830 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:27.830 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.831 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.831 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.831 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.831 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.831 
11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.831 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.831 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.831 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.831 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.831 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.831 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.831 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.831 "name": "Existed_Raid", 00:18:27.831 "uuid": "bfd65390-daa6-4f71-89cb-849c40002dae", 00:18:27.831 "strip_size_kb": 0, 00:18:27.831 "state": "configuring", 00:18:27.831 "raid_level": "raid1", 00:18:27.831 "superblock": true, 00:18:27.831 "num_base_bdevs": 2, 00:18:27.831 "num_base_bdevs_discovered": 1, 00:18:27.831 "num_base_bdevs_operational": 2, 00:18:27.831 "base_bdevs_list": [ 00:18:27.831 { 00:18:27.831 "name": "BaseBdev1", 00:18:27.831 "uuid": "0cd04a83-0003-4d35-b11f-fa575ad22a50", 00:18:27.831 "is_configured": true, 00:18:27.831 "data_offset": 256, 00:18:27.831 "data_size": 7936 00:18:27.831 }, 00:18:27.831 { 00:18:27.831 "name": "BaseBdev2", 00:18:27.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.831 "is_configured": false, 00:18:27.831 "data_offset": 0, 00:18:27.831 "data_size": 0 00:18:27.831 } 00:18:27.831 ] 00:18:27.831 }' 00:18:27.831 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:27.831 11:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.399 [2024-11-15 11:03:35.061100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:28.399 [2024-11-15 11:03:35.061350] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:28.399 [2024-11-15 11:03:35.061365] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:28.399 [2024-11-15 11:03:35.061453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:28.399 [2024-11-15 11:03:35.061525] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:28.399 [2024-11-15 11:03:35.061536] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:28.399 [2024-11-15 11:03:35.061614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.399 BaseBdev2 00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.399 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.399 [ 00:18:28.399 { 00:18:28.399 "name": "BaseBdev2", 00:18:28.399 "aliases": [ 00:18:28.399 "bf42fe62-3f36-4d74-8a6b-09932091d66e" 00:18:28.399 ], 00:18:28.399 "product_name": "Malloc disk", 00:18:28.399 "block_size": 4128, 00:18:28.399 "num_blocks": 8192, 00:18:28.399 "uuid": "bf42fe62-3f36-4d74-8a6b-09932091d66e", 00:18:28.399 "md_size": 32, 00:18:28.399 "md_interleave": true, 00:18:28.399 "dif_type": 0, 00:18:28.399 "assigned_rate_limits": { 00:18:28.399 "rw_ios_per_sec": 0, 00:18:28.399 "rw_mbytes_per_sec": 0, 00:18:28.399 "r_mbytes_per_sec": 0, 00:18:28.399 "w_mbytes_per_sec": 0 00:18:28.399 }, 00:18:28.399 "claimed": true, 00:18:28.399 "claim_type": "exclusive_write", 
00:18:28.399 "zoned": false, 00:18:28.399 "supported_io_types": { 00:18:28.399 "read": true, 00:18:28.399 "write": true, 00:18:28.399 "unmap": true, 00:18:28.399 "flush": true, 00:18:28.399 "reset": true, 00:18:28.400 "nvme_admin": false, 00:18:28.400 "nvme_io": false, 00:18:28.400 "nvme_io_md": false, 00:18:28.400 "write_zeroes": true, 00:18:28.400 "zcopy": true, 00:18:28.400 "get_zone_info": false, 00:18:28.400 "zone_management": false, 00:18:28.400 "zone_append": false, 00:18:28.400 "compare": false, 00:18:28.400 "compare_and_write": false, 00:18:28.400 "abort": true, 00:18:28.400 "seek_hole": false, 00:18:28.400 "seek_data": false, 00:18:28.400 "copy": true, 00:18:28.400 "nvme_iov_md": false 00:18:28.400 }, 00:18:28.400 "memory_domains": [ 00:18:28.400 { 00:18:28.400 "dma_device_id": "system", 00:18:28.400 "dma_device_type": 1 00:18:28.400 }, 00:18:28.400 { 00:18:28.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.400 "dma_device_type": 2 00:18:28.400 } 00:18:28.400 ], 00:18:28.400 "driver_specific": {} 00:18:28.400 } 00:18:28.400 ] 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.400 
11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.400 "name": "Existed_Raid", 00:18:28.400 "uuid": "bfd65390-daa6-4f71-89cb-849c40002dae", 00:18:28.400 "strip_size_kb": 0, 00:18:28.400 "state": "online", 00:18:28.400 "raid_level": "raid1", 00:18:28.400 "superblock": true, 00:18:28.400 "num_base_bdevs": 2, 00:18:28.400 "num_base_bdevs_discovered": 2, 00:18:28.400 
"num_base_bdevs_operational": 2, 00:18:28.400 "base_bdevs_list": [ 00:18:28.400 { 00:18:28.400 "name": "BaseBdev1", 00:18:28.400 "uuid": "0cd04a83-0003-4d35-b11f-fa575ad22a50", 00:18:28.400 "is_configured": true, 00:18:28.400 "data_offset": 256, 00:18:28.400 "data_size": 7936 00:18:28.400 }, 00:18:28.400 { 00:18:28.400 "name": "BaseBdev2", 00:18:28.400 "uuid": "bf42fe62-3f36-4d74-8a6b-09932091d66e", 00:18:28.400 "is_configured": true, 00:18:28.400 "data_offset": 256, 00:18:28.400 "data_size": 7936 00:18:28.400 } 00:18:28.400 ] 00:18:28.400 }' 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.400 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.660 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:28.660 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:28.660 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:28.660 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:28.660 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:28.660 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:28.660 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:28.660 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.660 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:28.660 11:03:35 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.660 [2024-11-15 11:03:35.572694] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:28.933 "name": "Existed_Raid", 00:18:28.933 "aliases": [ 00:18:28.933 "bfd65390-daa6-4f71-89cb-849c40002dae" 00:18:28.933 ], 00:18:28.933 "product_name": "Raid Volume", 00:18:28.933 "block_size": 4128, 00:18:28.933 "num_blocks": 7936, 00:18:28.933 "uuid": "bfd65390-daa6-4f71-89cb-849c40002dae", 00:18:28.933 "md_size": 32, 00:18:28.933 "md_interleave": true, 00:18:28.933 "dif_type": 0, 00:18:28.933 "assigned_rate_limits": { 00:18:28.933 "rw_ios_per_sec": 0, 00:18:28.933 "rw_mbytes_per_sec": 0, 00:18:28.933 "r_mbytes_per_sec": 0, 00:18:28.933 "w_mbytes_per_sec": 0 00:18:28.933 }, 00:18:28.933 "claimed": false, 00:18:28.933 "zoned": false, 00:18:28.933 "supported_io_types": { 00:18:28.933 "read": true, 00:18:28.933 "write": true, 00:18:28.933 "unmap": false, 00:18:28.933 "flush": false, 00:18:28.933 "reset": true, 00:18:28.933 "nvme_admin": false, 00:18:28.933 "nvme_io": false, 00:18:28.933 "nvme_io_md": false, 00:18:28.933 "write_zeroes": true, 00:18:28.933 "zcopy": false, 00:18:28.933 "get_zone_info": false, 00:18:28.933 "zone_management": false, 00:18:28.933 "zone_append": false, 00:18:28.933 "compare": false, 00:18:28.933 "compare_and_write": false, 00:18:28.933 "abort": false, 00:18:28.933 "seek_hole": false, 00:18:28.933 "seek_data": false, 00:18:28.933 "copy": false, 00:18:28.933 "nvme_iov_md": false 00:18:28.933 }, 00:18:28.933 "memory_domains": [ 00:18:28.933 { 00:18:28.933 "dma_device_id": "system", 00:18:28.933 "dma_device_type": 1 00:18:28.933 }, 00:18:28.933 { 00:18:28.933 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:28.933 "dma_device_type": 2 00:18:28.933 }, 00:18:28.933 { 00:18:28.933 "dma_device_id": "system", 00:18:28.933 "dma_device_type": 1 00:18:28.933 }, 00:18:28.933 { 00:18:28.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.933 "dma_device_type": 2 00:18:28.933 } 00:18:28.933 ], 00:18:28.933 "driver_specific": { 00:18:28.933 "raid": { 00:18:28.933 "uuid": "bfd65390-daa6-4f71-89cb-849c40002dae", 00:18:28.933 "strip_size_kb": 0, 00:18:28.933 "state": "online", 00:18:28.933 "raid_level": "raid1", 00:18:28.933 "superblock": true, 00:18:28.933 "num_base_bdevs": 2, 00:18:28.933 "num_base_bdevs_discovered": 2, 00:18:28.933 "num_base_bdevs_operational": 2, 00:18:28.933 "base_bdevs_list": [ 00:18:28.933 { 00:18:28.933 "name": "BaseBdev1", 00:18:28.933 "uuid": "0cd04a83-0003-4d35-b11f-fa575ad22a50", 00:18:28.933 "is_configured": true, 00:18:28.933 "data_offset": 256, 00:18:28.933 "data_size": 7936 00:18:28.933 }, 00:18:28.933 { 00:18:28.933 "name": "BaseBdev2", 00:18:28.933 "uuid": "bf42fe62-3f36-4d74-8a6b-09932091d66e", 00:18:28.933 "is_configured": true, 00:18:28.933 "data_offset": 256, 00:18:28.933 "data_size": 7936 00:18:28.933 } 00:18:28.933 ] 00:18:28.933 } 00:18:28.933 } 00:18:28.933 }' 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:28.933 BaseBdev2' 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:28.933 
11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.933 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.933 [2024-11-15 11:03:35.819990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.193 11:03:35 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.193 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.193 "name": "Existed_Raid", 00:18:29.193 "uuid": "bfd65390-daa6-4f71-89cb-849c40002dae", 00:18:29.193 "strip_size_kb": 0, 00:18:29.193 "state": "online", 00:18:29.193 "raid_level": "raid1", 00:18:29.193 "superblock": true, 00:18:29.193 "num_base_bdevs": 2, 00:18:29.193 "num_base_bdevs_discovered": 1, 00:18:29.193 "num_base_bdevs_operational": 1, 00:18:29.193 "base_bdevs_list": [ 00:18:29.193 { 00:18:29.193 "name": null, 00:18:29.193 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:29.193 "is_configured": false, 00:18:29.194 "data_offset": 0, 00:18:29.194 "data_size": 7936 00:18:29.194 }, 00:18:29.194 { 00:18:29.194 "name": "BaseBdev2", 00:18:29.194 "uuid": "bf42fe62-3f36-4d74-8a6b-09932091d66e", 00:18:29.194 "is_configured": true, 00:18:29.194 "data_offset": 256, 00:18:29.194 "data_size": 7936 00:18:29.194 } 00:18:29.194 ] 00:18:29.194 }' 00:18:29.194 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.194 11:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.451 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:29.451 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:29.451 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:29.451 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.451 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.451 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.451 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:29.710 11:03:36 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.710 [2024-11-15 11:03:36.390204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:29.710 [2024-11-15 11:03:36.390339] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:29.710 [2024-11-15 11:03:36.488415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.710 [2024-11-15 11:03:36.488467] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.710 [2024-11-15 11:03:36.488480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88646 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 88646 ']' 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 88646 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88646 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:29.710 killing process with pid 88646 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88646' 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 88646 00:18:29.710 [2024-11-15 11:03:36.590339] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:29.710 11:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 88646 00:18:29.710 [2024-11-15 11:03:36.607162] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:31.120 
11:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:31.120 00:18:31.120 real 0m5.144s 00:18:31.120 user 0m7.414s 00:18:31.120 sys 0m0.924s 00:18:31.120 11:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:31.120 11:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.120 ************************************ 00:18:31.120 END TEST raid_state_function_test_sb_md_interleaved 00:18:31.120 ************************************ 00:18:31.120 11:03:37 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:31.120 11:03:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:31.120 11:03:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:31.120 11:03:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:31.120 ************************************ 00:18:31.120 START TEST raid_superblock_test_md_interleaved 00:18:31.120 ************************************ 00:18:31.120 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:18:31.120 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:31.120 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:31.120 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:31.120 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:31.120 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:31.120 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:31.120 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88898 00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88898 00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 88898 ']' 00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:31.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:31.121 11:03:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.121 [2024-11-15 11:03:37.883012] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:18:31.121 [2024-11-15 11:03:37.883146] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88898 ] 00:18:31.121 [2024-11-15 11:03:38.038753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.380 [2024-11-15 11:03:38.152660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.640 [2024-11-15 11:03:38.344724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.640 [2024-11-15 11:03:38.344797] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.899 malloc1 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.899 [2024-11-15 11:03:38.795135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:31.899 [2024-11-15 11:03:38.795213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.899 [2024-11-15 11:03:38.795236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:31.899 [2024-11-15 11:03:38.795245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.899 
[2024-11-15 11:03:38.797256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.899 [2024-11-15 11:03:38.797298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:31.899 pt1 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.899 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.160 malloc2 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.160 [2024-11-15 11:03:38.851702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:32.160 [2024-11-15 11:03:38.851766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.160 [2024-11-15 11:03:38.851804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:32.160 [2024-11-15 11:03:38.851815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.160 [2024-11-15 11:03:38.853845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.160 [2024-11-15 11:03:38.853881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:32.160 pt2 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.160 [2024-11-15 11:03:38.863730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:32.160 [2024-11-15 11:03:38.865738] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:32.160 [2024-11-15 11:03:38.865961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:32.160 [2024-11-15 11:03:38.865974] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:32.160 [2024-11-15 11:03:38.866060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:32.160 [2024-11-15 11:03:38.866151] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:32.160 [2024-11-15 11:03:38.866168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:32.160 [2024-11-15 11:03:38.866252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.160 
11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.160 "name": "raid_bdev1", 00:18:32.160 "uuid": "f296e2d5-84c1-45d2-88a2-217663560070", 00:18:32.160 "strip_size_kb": 0, 00:18:32.160 "state": "online", 00:18:32.160 "raid_level": "raid1", 00:18:32.160 "superblock": true, 00:18:32.160 "num_base_bdevs": 2, 00:18:32.160 "num_base_bdevs_discovered": 2, 00:18:32.160 "num_base_bdevs_operational": 2, 00:18:32.160 "base_bdevs_list": [ 00:18:32.160 { 00:18:32.160 "name": "pt1", 00:18:32.160 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:32.160 "is_configured": true, 00:18:32.160 "data_offset": 256, 00:18:32.160 "data_size": 7936 00:18:32.160 }, 00:18:32.160 { 00:18:32.160 "name": "pt2", 00:18:32.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.160 "is_configured": true, 00:18:32.160 "data_offset": 256, 00:18:32.160 "data_size": 7936 00:18:32.160 } 00:18:32.160 ] 00:18:32.160 }' 00:18:32.160 11:03:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.160 11:03:38 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.730 [2024-11-15 11:03:39.359149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:32.730 "name": "raid_bdev1", 00:18:32.730 "aliases": [ 00:18:32.730 "f296e2d5-84c1-45d2-88a2-217663560070" 00:18:32.730 ], 00:18:32.730 "product_name": "Raid Volume", 00:18:32.730 "block_size": 4128, 00:18:32.730 "num_blocks": 7936, 00:18:32.730 "uuid": "f296e2d5-84c1-45d2-88a2-217663560070", 00:18:32.730 "md_size": 32, 
00:18:32.730 "md_interleave": true, 00:18:32.730 "dif_type": 0, 00:18:32.730 "assigned_rate_limits": { 00:18:32.730 "rw_ios_per_sec": 0, 00:18:32.730 "rw_mbytes_per_sec": 0, 00:18:32.730 "r_mbytes_per_sec": 0, 00:18:32.730 "w_mbytes_per_sec": 0 00:18:32.730 }, 00:18:32.730 "claimed": false, 00:18:32.730 "zoned": false, 00:18:32.730 "supported_io_types": { 00:18:32.730 "read": true, 00:18:32.730 "write": true, 00:18:32.730 "unmap": false, 00:18:32.730 "flush": false, 00:18:32.730 "reset": true, 00:18:32.730 "nvme_admin": false, 00:18:32.730 "nvme_io": false, 00:18:32.730 "nvme_io_md": false, 00:18:32.730 "write_zeroes": true, 00:18:32.730 "zcopy": false, 00:18:32.730 "get_zone_info": false, 00:18:32.730 "zone_management": false, 00:18:32.730 "zone_append": false, 00:18:32.730 "compare": false, 00:18:32.730 "compare_and_write": false, 00:18:32.730 "abort": false, 00:18:32.730 "seek_hole": false, 00:18:32.730 "seek_data": false, 00:18:32.730 "copy": false, 00:18:32.730 "nvme_iov_md": false 00:18:32.730 }, 00:18:32.730 "memory_domains": [ 00:18:32.730 { 00:18:32.730 "dma_device_id": "system", 00:18:32.730 "dma_device_type": 1 00:18:32.730 }, 00:18:32.730 { 00:18:32.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.730 "dma_device_type": 2 00:18:32.730 }, 00:18:32.730 { 00:18:32.730 "dma_device_id": "system", 00:18:32.730 "dma_device_type": 1 00:18:32.730 }, 00:18:32.730 { 00:18:32.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.730 "dma_device_type": 2 00:18:32.730 } 00:18:32.730 ], 00:18:32.730 "driver_specific": { 00:18:32.730 "raid": { 00:18:32.730 "uuid": "f296e2d5-84c1-45d2-88a2-217663560070", 00:18:32.730 "strip_size_kb": 0, 00:18:32.730 "state": "online", 00:18:32.730 "raid_level": "raid1", 00:18:32.730 "superblock": true, 00:18:32.730 "num_base_bdevs": 2, 00:18:32.730 "num_base_bdevs_discovered": 2, 00:18:32.730 "num_base_bdevs_operational": 2, 00:18:32.730 "base_bdevs_list": [ 00:18:32.730 { 00:18:32.730 "name": "pt1", 00:18:32.730 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:32.730 "is_configured": true, 00:18:32.730 "data_offset": 256, 00:18:32.730 "data_size": 7936 00:18:32.730 }, 00:18:32.730 { 00:18:32.730 "name": "pt2", 00:18:32.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.730 "is_configured": true, 00:18:32.730 "data_offset": 256, 00:18:32.730 "data_size": 7936 00:18:32.730 } 00:18:32.730 ] 00:18:32.730 } 00:18:32.730 } 00:18:32.730 }' 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:32.730 pt2' 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:32.730 11:03:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.730 [2024-11-15 11:03:39.586725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f296e2d5-84c1-45d2-88a2-217663560070 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z f296e2d5-84c1-45d2-88a2-217663560070 ']' 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.730 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.730 [2024-11-15 11:03:39.626407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:32.730 [2024-11-15 11:03:39.626471] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:32.730 [2024-11-15 11:03:39.626582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:32.731 [2024-11-15 11:03:39.626657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:32.731 [2024-11-15 11:03:39.626693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:32.731 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.731 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:32.731 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.731 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.731 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.731 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.991 11:03:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:32.991 11:03:39 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.991 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.991 [2024-11-15 11:03:39.758210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:32.991 [2024-11-15 11:03:39.760242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:32.991 [2024-11-15 11:03:39.760390] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:32.991 [2024-11-15 11:03:39.760507] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:32.991 [2024-11-15 11:03:39.760564] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:32.991 [2024-11-15 11:03:39.760596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:32.991 request: 00:18:32.991 { 00:18:32.991 "name": "raid_bdev1", 00:18:32.991 "raid_level": "raid1", 00:18:32.991 "base_bdevs": [ 00:18:32.991 "malloc1", 00:18:32.991 "malloc2" 00:18:32.991 ], 00:18:32.991 "superblock": false, 00:18:32.992 "method": "bdev_raid_create", 00:18:32.992 "req_id": 1 00:18:32.992 } 00:18:32.992 Got JSON-RPC error response 00:18:32.992 response: 00:18:32.992 { 00:18:32.992 "code": -17, 00:18:32.992 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:32.992 } 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.992 11:03:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.992 [2024-11-15 11:03:39.826075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:32.992 [2024-11-15 11:03:39.826182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.992 [2024-11-15 11:03:39.826234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:32.992 [2024-11-15 11:03:39.826272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.992 [2024-11-15 11:03:39.828314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.992 [2024-11-15 11:03:39.828409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:32.992 [2024-11-15 11:03:39.828504] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:32.992 [2024-11-15 11:03:39.828597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:32.992 pt1 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.992 11:03:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.992 
"name": "raid_bdev1", 00:18:32.992 "uuid": "f296e2d5-84c1-45d2-88a2-217663560070", 00:18:32.992 "strip_size_kb": 0, 00:18:32.992 "state": "configuring", 00:18:32.992 "raid_level": "raid1", 00:18:32.992 "superblock": true, 00:18:32.992 "num_base_bdevs": 2, 00:18:32.992 "num_base_bdevs_discovered": 1, 00:18:32.992 "num_base_bdevs_operational": 2, 00:18:32.992 "base_bdevs_list": [ 00:18:32.992 { 00:18:32.992 "name": "pt1", 00:18:32.992 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:32.992 "is_configured": true, 00:18:32.992 "data_offset": 256, 00:18:32.992 "data_size": 7936 00:18:32.992 }, 00:18:32.992 { 00:18:32.992 "name": null, 00:18:32.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.992 "is_configured": false, 00:18:32.992 "data_offset": 256, 00:18:32.992 "data_size": 7936 00:18:32.992 } 00:18:32.992 ] 00:18:32.992 }' 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.992 11:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.559 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.560 [2024-11-15 11:03:40.333198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:33.560 [2024-11-15 11:03:40.333327] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.560 [2024-11-15 11:03:40.333367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:33.560 [2024-11-15 11:03:40.333398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.560 [2024-11-15 11:03:40.333587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.560 [2024-11-15 11:03:40.333632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:33.560 [2024-11-15 11:03:40.333705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:33.560 [2024-11-15 11:03:40.333754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:33.560 [2024-11-15 11:03:40.333875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:33.560 [2024-11-15 11:03:40.333913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:33.560 [2024-11-15 11:03:40.334008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:33.560 [2024-11-15 11:03:40.334116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:33.560 [2024-11-15 11:03:40.334152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:33.560 [2024-11-15 11:03:40.334255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.560 pt2 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:33.560 11:03:40 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.560 "name": 
"raid_bdev1", 00:18:33.560 "uuid": "f296e2d5-84c1-45d2-88a2-217663560070", 00:18:33.560 "strip_size_kb": 0, 00:18:33.560 "state": "online", 00:18:33.560 "raid_level": "raid1", 00:18:33.560 "superblock": true, 00:18:33.560 "num_base_bdevs": 2, 00:18:33.560 "num_base_bdevs_discovered": 2, 00:18:33.560 "num_base_bdevs_operational": 2, 00:18:33.560 "base_bdevs_list": [ 00:18:33.560 { 00:18:33.560 "name": "pt1", 00:18:33.560 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:33.560 "is_configured": true, 00:18:33.560 "data_offset": 256, 00:18:33.560 "data_size": 7936 00:18:33.560 }, 00:18:33.560 { 00:18:33.560 "name": "pt2", 00:18:33.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.560 "is_configured": true, 00:18:33.560 "data_offset": 256, 00:18:33.560 "data_size": 7936 00:18:33.560 } 00:18:33.560 ] 00:18:33.560 }' 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.560 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.129 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:34.129 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:34.129 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:34.129 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:34.129 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:34.129 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:34.129 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:34.129 11:03:40 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:34.129 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.129 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.129 [2024-11-15 11:03:40.776801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.129 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.129 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:34.129 "name": "raid_bdev1", 00:18:34.129 "aliases": [ 00:18:34.129 "f296e2d5-84c1-45d2-88a2-217663560070" 00:18:34.129 ], 00:18:34.129 "product_name": "Raid Volume", 00:18:34.129 "block_size": 4128, 00:18:34.129 "num_blocks": 7936, 00:18:34.129 "uuid": "f296e2d5-84c1-45d2-88a2-217663560070", 00:18:34.129 "md_size": 32, 00:18:34.129 "md_interleave": true, 00:18:34.129 "dif_type": 0, 00:18:34.129 "assigned_rate_limits": { 00:18:34.129 "rw_ios_per_sec": 0, 00:18:34.129 "rw_mbytes_per_sec": 0, 00:18:34.129 "r_mbytes_per_sec": 0, 00:18:34.129 "w_mbytes_per_sec": 0 00:18:34.129 }, 00:18:34.129 "claimed": false, 00:18:34.129 "zoned": false, 00:18:34.129 "supported_io_types": { 00:18:34.129 "read": true, 00:18:34.129 "write": true, 00:18:34.129 "unmap": false, 00:18:34.129 "flush": false, 00:18:34.129 "reset": true, 00:18:34.129 "nvme_admin": false, 00:18:34.129 "nvme_io": false, 00:18:34.129 "nvme_io_md": false, 00:18:34.129 "write_zeroes": true, 00:18:34.129 "zcopy": false, 00:18:34.129 "get_zone_info": false, 00:18:34.129 "zone_management": false, 00:18:34.129 "zone_append": false, 00:18:34.129 "compare": false, 00:18:34.129 "compare_and_write": false, 00:18:34.129 "abort": false, 00:18:34.129 "seek_hole": false, 00:18:34.129 "seek_data": false, 00:18:34.129 "copy": false, 00:18:34.129 "nvme_iov_md": 
false 00:18:34.129 }, 00:18:34.129 "memory_domains": [ 00:18:34.129 { 00:18:34.129 "dma_device_id": "system", 00:18:34.129 "dma_device_type": 1 00:18:34.129 }, 00:18:34.129 { 00:18:34.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.129 "dma_device_type": 2 00:18:34.129 }, 00:18:34.129 { 00:18:34.129 "dma_device_id": "system", 00:18:34.129 "dma_device_type": 1 00:18:34.129 }, 00:18:34.129 { 00:18:34.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.129 "dma_device_type": 2 00:18:34.129 } 00:18:34.129 ], 00:18:34.129 "driver_specific": { 00:18:34.129 "raid": { 00:18:34.129 "uuid": "f296e2d5-84c1-45d2-88a2-217663560070", 00:18:34.129 "strip_size_kb": 0, 00:18:34.129 "state": "online", 00:18:34.129 "raid_level": "raid1", 00:18:34.129 "superblock": true, 00:18:34.129 "num_base_bdevs": 2, 00:18:34.129 "num_base_bdevs_discovered": 2, 00:18:34.129 "num_base_bdevs_operational": 2, 00:18:34.129 "base_bdevs_list": [ 00:18:34.129 { 00:18:34.129 "name": "pt1", 00:18:34.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:34.129 "is_configured": true, 00:18:34.129 "data_offset": 256, 00:18:34.129 "data_size": 7936 00:18:34.129 }, 00:18:34.129 { 00:18:34.129 "name": "pt2", 00:18:34.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.129 "is_configured": true, 00:18:34.129 "data_offset": 256, 00:18:34.129 "data_size": 7936 00:18:34.129 } 00:18:34.129 ] 00:18:34.129 } 00:18:34.129 } 00:18:34.129 }' 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:34.130 pt2' 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.130 11:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.130 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:18:34.130 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:34.130 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:34.130 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:34.130 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.130 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.130 [2024-11-15 11:03:41.024374] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.130 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' f296e2d5-84c1-45d2-88a2-217663560070 '!=' f296e2d5-84c1-45d2-88a2-217663560070 ']' 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.390 [2024-11-15 11:03:41.064060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:34.390 "name": "raid_bdev1", 00:18:34.390 "uuid": "f296e2d5-84c1-45d2-88a2-217663560070", 00:18:34.390 "strip_size_kb": 0, 00:18:34.390 "state": "online", 00:18:34.390 "raid_level": "raid1", 00:18:34.390 "superblock": true, 00:18:34.390 "num_base_bdevs": 2, 00:18:34.390 "num_base_bdevs_discovered": 1, 00:18:34.390 "num_base_bdevs_operational": 1, 00:18:34.390 "base_bdevs_list": [ 00:18:34.390 { 00:18:34.390 "name": null, 00:18:34.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.390 "is_configured": false, 00:18:34.390 "data_offset": 0, 00:18:34.390 "data_size": 7936 00:18:34.390 }, 00:18:34.390 { 00:18:34.390 "name": "pt2", 00:18:34.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.390 "is_configured": true, 00:18:34.390 "data_offset": 256, 00:18:34.390 "data_size": 7936 00:18:34.390 } 00:18:34.390 ] 00:18:34.390 }' 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.390 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.650 [2024-11-15 11:03:41.507237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.650 [2024-11-15 11:03:41.507343] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.650 [2024-11-15 11:03:41.507447] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.650 [2024-11-15 11:03:41.507508] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:34.650 [2024-11-15 11:03:41.507568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.650 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.650 [2024-11-15 11:03:41.575120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:34.650 [2024-11-15 11:03:41.575218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.650 [2024-11-15 11:03:41.575251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:34.650 [2024-11-15 11:03:41.575280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.910 [2024-11-15 11:03:41.577207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.910 [2024-11-15 11:03:41.577249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:34.910 [2024-11-15 11:03:41.577317] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:34.910 [2024-11-15 11:03:41.577366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:34.910 [2024-11-15 11:03:41.577435] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:34.910 [2024-11-15 11:03:41.577447] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:18:34.910 [2024-11-15 11:03:41.577536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:34.910 [2024-11-15 11:03:41.577604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:34.910 [2024-11-15 11:03:41.577612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:34.910 [2024-11-15 11:03:41.577675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.910 pt2 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.910 11:03:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.910 "name": "raid_bdev1", 00:18:34.910 "uuid": "f296e2d5-84c1-45d2-88a2-217663560070", 00:18:34.910 "strip_size_kb": 0, 00:18:34.910 "state": "online", 00:18:34.910 "raid_level": "raid1", 00:18:34.910 "superblock": true, 00:18:34.910 "num_base_bdevs": 2, 00:18:34.910 "num_base_bdevs_discovered": 1, 00:18:34.910 "num_base_bdevs_operational": 1, 00:18:34.910 "base_bdevs_list": [ 00:18:34.910 { 00:18:34.910 "name": null, 00:18:34.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.910 "is_configured": false, 00:18:34.910 "data_offset": 256, 00:18:34.910 "data_size": 7936 00:18:34.910 }, 00:18:34.910 { 00:18:34.910 "name": "pt2", 00:18:34.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.910 "is_configured": true, 00:18:34.910 "data_offset": 256, 00:18:34.910 "data_size": 7936 00:18:34.910 } 00:18:34.910 ] 00:18:34.910 }' 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.910 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.170 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:35.170 11:03:41 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.170 11:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.170 [2024-11-15 11:03:42.002407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:35.170 [2024-11-15 11:03:42.002484] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:35.170 [2024-11-15 11:03:42.002608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:35.170 [2024-11-15 11:03:42.002676] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:35.170 [2024-11-15 11:03:42.002727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.170 [2024-11-15 11:03:42.062312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:35.170 [2024-11-15 11:03:42.062431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.170 [2024-11-15 11:03:42.062473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:35.170 [2024-11-15 11:03:42.062500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.170 [2024-11-15 11:03:42.064434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.170 [2024-11-15 11:03:42.064506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:35.170 [2024-11-15 11:03:42.064591] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:35.170 [2024-11-15 11:03:42.064662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:35.170 [2024-11-15 11:03:42.064787] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:35.170 [2024-11-15 11:03:42.064836] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:35.170 [2024-11-15 11:03:42.064897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:35.170 [2024-11-15 11:03:42.065007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:35.170 [2024-11-15 11:03:42.065111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:35.170 [2024-11-15 11:03:42.065148] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:35.170 [2024-11-15 11:03:42.065225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:35.170 [2024-11-15 11:03:42.065332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:35.170 [2024-11-15 11:03:42.065373] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:35.170 [2024-11-15 11:03:42.065481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.170 pt1 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.170 11:03:42 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.170 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.170 "name": "raid_bdev1", 00:18:35.170 "uuid": "f296e2d5-84c1-45d2-88a2-217663560070", 00:18:35.170 "strip_size_kb": 0, 00:18:35.170 "state": "online", 00:18:35.170 "raid_level": "raid1", 00:18:35.170 "superblock": true, 00:18:35.170 "num_base_bdevs": 2, 00:18:35.170 "num_base_bdevs_discovered": 1, 00:18:35.170 "num_base_bdevs_operational": 1, 00:18:35.170 "base_bdevs_list": [ 00:18:35.170 { 00:18:35.170 "name": null, 00:18:35.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.171 "is_configured": false, 00:18:35.171 "data_offset": 256, 00:18:35.171 "data_size": 7936 00:18:35.171 }, 00:18:35.171 { 00:18:35.171 "name": "pt2", 00:18:35.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.171 "is_configured": true, 00:18:35.171 "data_offset": 256, 00:18:35.171 "data_size": 7936 00:18:35.171 } 00:18:35.171 ] 00:18:35.171 }' 00:18:35.171 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.171 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.740 [2024-11-15 11:03:42.573671] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' f296e2d5-84c1-45d2-88a2-217663560070 '!=' f296e2d5-84c1-45d2-88a2-217663560070 ']' 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88898 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 88898 ']' 00:18:35.740 11:03:42 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 88898 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88898 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:35.740 killing process with pid 88898 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88898' 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 88898 00:18:35.740 [2024-11-15 11:03:42.641948] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:35.740 [2024-11-15 11:03:42.642047] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:35.740 [2024-11-15 11:03:42.642101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:35.740 11:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 88898 00:18:35.740 [2024-11-15 11:03:42.642117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:36.000 [2024-11-15 11:03:42.847068] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:37.381 ************************************ 00:18:37.381 END TEST raid_superblock_test_md_interleaved 00:18:37.381 ************************************ 00:18:37.381 11:03:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:37.381 00:18:37.381 real 0m6.155s 00:18:37.381 user 0m9.377s 00:18:37.381 sys 0m1.090s 00:18:37.381 11:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:37.381 11:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.381 11:03:44 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:37.381 11:03:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:37.381 11:03:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:37.381 11:03:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:37.381 ************************************ 00:18:37.381 START TEST raid_rebuild_test_sb_md_interleaved 00:18:37.381 ************************************ 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:37.381 11:03:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:37.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89221 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89221 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89221 ']' 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:37.381 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.381 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:37.381 Zero copy mechanism will not be used. 00:18:37.381 [2024-11-15 11:03:44.120829] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:18:37.381 [2024-11-15 11:03:44.121071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89221 ] 00:18:37.381 [2024-11-15 11:03:44.298128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.641 [2024-11-15 11:03:44.411863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.900 [2024-11-15 11:03:44.624551] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.900 [2024-11-15 11:03:44.624641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.160 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:38.160 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:18:38.160 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:38.160 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:38.160 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.160 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.160 BaseBdev1_malloc 00:18:38.160 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.160 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:38.160 11:03:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.160 11:03:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.160 [2024-11-15 11:03:45.007255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:38.160 [2024-11-15 11:03:45.007388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.160 [2024-11-15 11:03:45.007451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:38.160 [2024-11-15 11:03:45.007535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.160 [2024-11-15 11:03:45.009661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.160 [2024-11-15 11:03:45.009742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:38.160 BaseBdev1 00:18:38.160 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.160 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:38.160 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:38.160 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.160 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.160 BaseBdev2_malloc 00:18:38.160 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.160 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:38.160 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.160 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:38.160 [2024-11-15 11:03:45.059042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:38.160 [2024-11-15 11:03:45.059160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.160 [2024-11-15 11:03:45.059197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:38.160 [2024-11-15 11:03:45.059233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.160 [2024-11-15 11:03:45.061077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.160 [2024-11-15 11:03:45.061155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:38.160 BaseBdev2 00:18:38.160 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.160 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:38.160 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.160 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.420 spare_malloc 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.420 spare_delay 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.420 [2024-11-15 11:03:45.139343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:38.420 [2024-11-15 11:03:45.139404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.420 [2024-11-15 11:03:45.139423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:38.420 [2024-11-15 11:03:45.139434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.420 [2024-11-15 11:03:45.141238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.420 [2024-11-15 11:03:45.141280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:38.420 spare 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.420 [2024-11-15 11:03:45.151365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:38.420 [2024-11-15 11:03:45.153167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:38.420 [2024-11-15 
11:03:45.153418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:38.420 [2024-11-15 11:03:45.153471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:38.420 [2024-11-15 11:03:45.153577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:38.420 [2024-11-15 11:03:45.153688] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:38.420 [2024-11-15 11:03:45.153722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:38.420 [2024-11-15 11:03:45.153830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.420 "name": "raid_bdev1", 00:18:38.420 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:38.420 "strip_size_kb": 0, 00:18:38.420 "state": "online", 00:18:38.420 "raid_level": "raid1", 00:18:38.420 "superblock": true, 00:18:38.420 "num_base_bdevs": 2, 00:18:38.420 "num_base_bdevs_discovered": 2, 00:18:38.420 "num_base_bdevs_operational": 2, 00:18:38.420 "base_bdevs_list": [ 00:18:38.420 { 00:18:38.420 "name": "BaseBdev1", 00:18:38.420 "uuid": "6b7ee857-7e77-530f-8072-ed54ac9d5de6", 00:18:38.420 "is_configured": true, 00:18:38.420 "data_offset": 256, 00:18:38.420 "data_size": 7936 00:18:38.420 }, 00:18:38.420 { 00:18:38.420 "name": "BaseBdev2", 00:18:38.420 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:38.420 "is_configured": true, 00:18:38.420 "data_offset": 256, 00:18:38.420 "data_size": 7936 00:18:38.420 } 00:18:38.420 ] 00:18:38.420 }' 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.420 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.680 11:03:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:38.680 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.680 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.680 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:38.680 [2024-11-15 11:03:45.578960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.680 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:38.943 11:03:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.943 [2024-11-15 11:03:45.678473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.943 11:03:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.943 "name": "raid_bdev1", 00:18:38.943 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:38.943 "strip_size_kb": 0, 00:18:38.943 "state": "online", 00:18:38.943 "raid_level": "raid1", 00:18:38.943 "superblock": true, 00:18:38.943 "num_base_bdevs": 2, 00:18:38.943 "num_base_bdevs_discovered": 1, 00:18:38.943 "num_base_bdevs_operational": 1, 00:18:38.943 "base_bdevs_list": [ 00:18:38.943 { 00:18:38.943 "name": null, 00:18:38.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.943 "is_configured": false, 00:18:38.943 "data_offset": 0, 00:18:38.943 "data_size": 7936 00:18:38.943 }, 00:18:38.943 { 00:18:38.943 "name": "BaseBdev2", 00:18:38.943 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:38.943 "is_configured": true, 00:18:38.943 "data_offset": 256, 00:18:38.943 "data_size": 7936 00:18:38.943 } 00:18:38.943 ] 00:18:38.943 }' 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.943 11:03:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.208 11:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:39.208 11:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.208 11:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.468 [2024-11-15 11:03:46.137705] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:39.468 [2024-11-15 11:03:46.156459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:39.468 11:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.468 11:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:39.468 [2024-11-15 11:03:46.158287] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:40.403 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.403 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.403 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.403 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.404 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.404 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.404 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.404 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.404 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.404 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.404 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.404 "name": "raid_bdev1", 00:18:40.404 
"uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:40.404 "strip_size_kb": 0, 00:18:40.404 "state": "online", 00:18:40.404 "raid_level": "raid1", 00:18:40.404 "superblock": true, 00:18:40.404 "num_base_bdevs": 2, 00:18:40.404 "num_base_bdevs_discovered": 2, 00:18:40.404 "num_base_bdevs_operational": 2, 00:18:40.404 "process": { 00:18:40.404 "type": "rebuild", 00:18:40.404 "target": "spare", 00:18:40.404 "progress": { 00:18:40.404 "blocks": 2560, 00:18:40.404 "percent": 32 00:18:40.404 } 00:18:40.404 }, 00:18:40.404 "base_bdevs_list": [ 00:18:40.404 { 00:18:40.404 "name": "spare", 00:18:40.404 "uuid": "8f85fcec-1fcc-5630-a19c-cb9a775bfd21", 00:18:40.404 "is_configured": true, 00:18:40.404 "data_offset": 256, 00:18:40.404 "data_size": 7936 00:18:40.404 }, 00:18:40.404 { 00:18:40.404 "name": "BaseBdev2", 00:18:40.404 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:40.404 "is_configured": true, 00:18:40.404 "data_offset": 256, 00:18:40.404 "data_size": 7936 00:18:40.404 } 00:18:40.404 ] 00:18:40.404 }' 00:18:40.404 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.404 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.404 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.404 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.404 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:40.404 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.404 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.404 [2024-11-15 11:03:47.297699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:40.663 [2024-11-15 11:03:47.363887] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:40.663 [2024-11-15 11:03:47.364005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.663 [2024-11-15 11:03:47.364041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:40.663 [2024-11-15 11:03:47.364068] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.663 "name": "raid_bdev1", 00:18:40.663 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:40.663 "strip_size_kb": 0, 00:18:40.663 "state": "online", 00:18:40.663 "raid_level": "raid1", 00:18:40.663 "superblock": true, 00:18:40.663 "num_base_bdevs": 2, 00:18:40.663 "num_base_bdevs_discovered": 1, 00:18:40.663 "num_base_bdevs_operational": 1, 00:18:40.663 "base_bdevs_list": [ 00:18:40.663 { 00:18:40.663 "name": null, 00:18:40.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.663 "is_configured": false, 00:18:40.663 "data_offset": 0, 00:18:40.663 "data_size": 7936 00:18:40.663 }, 00:18:40.663 { 00:18:40.663 "name": "BaseBdev2", 00:18:40.663 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:40.663 "is_configured": true, 00:18:40.663 "data_offset": 256, 00:18:40.663 "data_size": 7936 00:18:40.663 } 00:18:40.663 ] 00:18:40.663 }' 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.663 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.921 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:40.921 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:40.921 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:40.921 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:40.921 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.921 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.921 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.921 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.921 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.921 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.181 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.181 "name": "raid_bdev1", 00:18:41.181 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:41.181 "strip_size_kb": 0, 00:18:41.181 "state": "online", 00:18:41.181 "raid_level": "raid1", 00:18:41.181 "superblock": true, 00:18:41.181 "num_base_bdevs": 2, 00:18:41.181 "num_base_bdevs_discovered": 1, 00:18:41.181 "num_base_bdevs_operational": 1, 00:18:41.181 "base_bdevs_list": [ 00:18:41.181 { 00:18:41.181 "name": null, 00:18:41.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.181 "is_configured": false, 00:18:41.181 "data_offset": 0, 00:18:41.181 "data_size": 7936 00:18:41.181 }, 00:18:41.181 { 00:18:41.181 "name": "BaseBdev2", 00:18:41.181 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:41.181 "is_configured": true, 00:18:41.181 "data_offset": 256, 00:18:41.181 "data_size": 7936 00:18:41.181 } 00:18:41.181 ] 00:18:41.181 }' 
00:18:41.181 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.181 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:41.181 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.181 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:41.181 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:41.181 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.181 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.181 [2024-11-15 11:03:47.963164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:41.181 [2024-11-15 11:03:47.981750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:41.181 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.181 11:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:41.181 [2024-11-15 11:03:47.983819] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:42.118 11:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.118 11:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.118 11:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.118 11:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:42.118 11:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.118 11:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.118 11:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.118 11:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.118 11:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.118 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.118 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.118 "name": "raid_bdev1", 00:18:42.118 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:42.118 "strip_size_kb": 0, 00:18:42.118 "state": "online", 00:18:42.118 "raid_level": "raid1", 00:18:42.118 "superblock": true, 00:18:42.118 "num_base_bdevs": 2, 00:18:42.118 "num_base_bdevs_discovered": 2, 00:18:42.118 "num_base_bdevs_operational": 2, 00:18:42.118 "process": { 00:18:42.118 "type": "rebuild", 00:18:42.118 "target": "spare", 00:18:42.118 "progress": { 00:18:42.118 "blocks": 2560, 00:18:42.118 "percent": 32 00:18:42.118 } 00:18:42.118 }, 00:18:42.118 "base_bdevs_list": [ 00:18:42.118 { 00:18:42.118 "name": "spare", 00:18:42.118 "uuid": "8f85fcec-1fcc-5630-a19c-cb9a775bfd21", 00:18:42.118 "is_configured": true, 00:18:42.119 "data_offset": 256, 00:18:42.119 "data_size": 7936 00:18:42.119 }, 00:18:42.119 { 00:18:42.119 "name": "BaseBdev2", 00:18:42.119 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:42.119 "is_configured": true, 00:18:42.119 "data_offset": 256, 00:18:42.119 "data_size": 7936 00:18:42.119 } 00:18:42.119 ] 00:18:42.119 }' 00:18:42.119 11:03:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:42.379 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=748 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.379 11:03:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.379 "name": "raid_bdev1", 00:18:42.379 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:42.379 "strip_size_kb": 0, 00:18:42.379 "state": "online", 00:18:42.379 "raid_level": "raid1", 00:18:42.379 "superblock": true, 00:18:42.379 "num_base_bdevs": 2, 00:18:42.379 "num_base_bdevs_discovered": 2, 00:18:42.379 "num_base_bdevs_operational": 2, 00:18:42.379 "process": { 00:18:42.379 "type": "rebuild", 00:18:42.379 "target": "spare", 00:18:42.379 "progress": { 00:18:42.379 "blocks": 2816, 00:18:42.379 "percent": 35 00:18:42.379 } 00:18:42.379 }, 00:18:42.379 "base_bdevs_list": [ 00:18:42.379 { 00:18:42.379 "name": "spare", 00:18:42.379 "uuid": "8f85fcec-1fcc-5630-a19c-cb9a775bfd21", 00:18:42.379 "is_configured": true, 00:18:42.379 "data_offset": 256, 00:18:42.379 "data_size": 7936 00:18:42.379 }, 00:18:42.379 { 00:18:42.379 "name": "BaseBdev2", 00:18:42.379 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:42.379 "is_configured": true, 00:18:42.379 "data_offset": 256, 00:18:42.379 "data_size": 7936 00:18:42.379 } 00:18:42.379 ] 00:18:42.379 }' 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.379 11:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:43.758 11:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.758 11:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.758 11:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.758 11:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.758 11:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.758 11:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.758 11:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.758 11:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.758 11:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.758 11:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.758 11:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.758 11:03:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.758 "name": "raid_bdev1", 00:18:43.758 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:43.758 "strip_size_kb": 0, 00:18:43.758 "state": "online", 00:18:43.758 "raid_level": "raid1", 00:18:43.758 "superblock": true, 00:18:43.758 "num_base_bdevs": 2, 00:18:43.758 "num_base_bdevs_discovered": 2, 00:18:43.758 "num_base_bdevs_operational": 2, 00:18:43.758 "process": { 00:18:43.758 "type": "rebuild", 00:18:43.758 "target": "spare", 00:18:43.758 "progress": { 00:18:43.758 "blocks": 5632, 00:18:43.758 "percent": 70 00:18:43.758 } 00:18:43.758 }, 00:18:43.758 "base_bdevs_list": [ 00:18:43.758 { 00:18:43.758 "name": "spare", 00:18:43.758 "uuid": "8f85fcec-1fcc-5630-a19c-cb9a775bfd21", 00:18:43.758 "is_configured": true, 00:18:43.758 "data_offset": 256, 00:18:43.758 "data_size": 7936 00:18:43.758 }, 00:18:43.758 { 00:18:43.758 "name": "BaseBdev2", 00:18:43.758 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:43.758 "is_configured": true, 00:18:43.758 "data_offset": 256, 00:18:43.758 "data_size": 7936 00:18:43.758 } 00:18:43.758 ] 00:18:43.758 }' 00:18:43.758 11:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.758 11:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.758 11:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.758 11:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.758 11:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:44.327 [2024-11-15 11:03:51.097867] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:44.327 [2024-11-15 11:03:51.097949] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:44.327 [2024-11-15 11:03:51.098074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.586 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:44.586 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.586 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.586 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.586 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.586 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.586 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.586 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.586 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.586 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.586 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.586 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.586 "name": "raid_bdev1", 00:18:44.586 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:44.586 "strip_size_kb": 0, 00:18:44.586 "state": "online", 00:18:44.586 "raid_level": "raid1", 00:18:44.586 "superblock": true, 00:18:44.586 "num_base_bdevs": 2, 00:18:44.586 
"num_base_bdevs_discovered": 2, 00:18:44.586 "num_base_bdevs_operational": 2, 00:18:44.586 "base_bdevs_list": [ 00:18:44.586 { 00:18:44.586 "name": "spare", 00:18:44.586 "uuid": "8f85fcec-1fcc-5630-a19c-cb9a775bfd21", 00:18:44.586 "is_configured": true, 00:18:44.586 "data_offset": 256, 00:18:44.586 "data_size": 7936 00:18:44.586 }, 00:18:44.586 { 00:18:44.586 "name": "BaseBdev2", 00:18:44.586 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:44.586 "is_configured": true, 00:18:44.586 "data_offset": 256, 00:18:44.586 "data_size": 7936 00:18:44.586 } 00:18:44.586 ] 00:18:44.586 }' 00:18:44.586 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.847 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:44.847 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.847 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.848 11:03:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.848 "name": "raid_bdev1", 00:18:44.848 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:44.848 "strip_size_kb": 0, 00:18:44.848 "state": "online", 00:18:44.848 "raid_level": "raid1", 00:18:44.848 "superblock": true, 00:18:44.848 "num_base_bdevs": 2, 00:18:44.848 "num_base_bdevs_discovered": 2, 00:18:44.848 "num_base_bdevs_operational": 2, 00:18:44.848 "base_bdevs_list": [ 00:18:44.848 { 00:18:44.848 "name": "spare", 00:18:44.848 "uuid": "8f85fcec-1fcc-5630-a19c-cb9a775bfd21", 00:18:44.848 "is_configured": true, 00:18:44.848 "data_offset": 256, 00:18:44.848 "data_size": 7936 00:18:44.848 }, 00:18:44.848 { 00:18:44.848 "name": "BaseBdev2", 00:18:44.848 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:44.848 "is_configured": true, 00:18:44.848 "data_offset": 256, 00:18:44.848 "data_size": 7936 00:18:44.848 } 00:18:44.848 ] 00:18:44.848 }' 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:44.848 11:03:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.848 "name": 
"raid_bdev1", 00:18:44.848 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:44.848 "strip_size_kb": 0, 00:18:44.848 "state": "online", 00:18:44.848 "raid_level": "raid1", 00:18:44.848 "superblock": true, 00:18:44.848 "num_base_bdevs": 2, 00:18:44.848 "num_base_bdevs_discovered": 2, 00:18:44.848 "num_base_bdevs_operational": 2, 00:18:44.848 "base_bdevs_list": [ 00:18:44.848 { 00:18:44.848 "name": "spare", 00:18:44.848 "uuid": "8f85fcec-1fcc-5630-a19c-cb9a775bfd21", 00:18:44.848 "is_configured": true, 00:18:44.848 "data_offset": 256, 00:18:44.848 "data_size": 7936 00:18:44.848 }, 00:18:44.848 { 00:18:44.848 "name": "BaseBdev2", 00:18:44.848 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:44.848 "is_configured": true, 00:18:44.848 "data_offset": 256, 00:18:44.848 "data_size": 7936 00:18:44.848 } 00:18:44.848 ] 00:18:44.848 }' 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.848 11:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.418 [2024-11-15 11:03:52.128491] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.418 [2024-11-15 11:03:52.128579] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:45.418 [2024-11-15 11:03:52.128723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.418 [2024-11-15 11:03:52.128831] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:45.418 [2024-11-15 
11:03:52.128871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.418 11:03:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.418 [2024-11-15 11:03:52.204442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:45.418 [2024-11-15 11:03:52.204558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.418 [2024-11-15 11:03:52.204587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:45.418 [2024-11-15 11:03:52.204598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.418 [2024-11-15 11:03:52.206776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.418 [2024-11-15 11:03:52.206817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:45.418 [2024-11-15 11:03:52.206885] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:45.418 [2024-11-15 11:03:52.206953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:45.418 [2024-11-15 11:03:52.207073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.418 spare 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.418 [2024-11-15 11:03:52.306989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:45.418 [2024-11-15 11:03:52.307122] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:45.418 [2024-11-15 11:03:52.307294] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:45.418 [2024-11-15 11:03:52.307468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:45.418 [2024-11-15 11:03:52.307482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:45.418 [2024-11-15 11:03:52.307588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.418 11:03:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.418 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.678 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.678 "name": "raid_bdev1", 00:18:45.678 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:45.678 "strip_size_kb": 0, 00:18:45.678 "state": "online", 00:18:45.678 "raid_level": "raid1", 00:18:45.678 "superblock": true, 00:18:45.678 "num_base_bdevs": 2, 00:18:45.678 "num_base_bdevs_discovered": 2, 00:18:45.678 "num_base_bdevs_operational": 2, 00:18:45.678 "base_bdevs_list": [ 00:18:45.678 { 00:18:45.678 "name": "spare", 00:18:45.678 "uuid": "8f85fcec-1fcc-5630-a19c-cb9a775bfd21", 00:18:45.678 "is_configured": true, 00:18:45.678 "data_offset": 256, 00:18:45.678 "data_size": 7936 00:18:45.678 }, 00:18:45.678 { 00:18:45.678 "name": "BaseBdev2", 00:18:45.678 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:45.678 "is_configured": true, 00:18:45.678 "data_offset": 256, 00:18:45.678 "data_size": 7936 00:18:45.678 } 00:18:45.678 ] 00:18:45.678 }' 00:18:45.678 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.678 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.936 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:45.936 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.936 11:03:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:45.936 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:45.936 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.936 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.936 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.936 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.936 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.936 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.936 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.936 "name": "raid_bdev1", 00:18:45.936 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:45.936 "strip_size_kb": 0, 00:18:45.936 "state": "online", 00:18:45.936 "raid_level": "raid1", 00:18:45.936 "superblock": true, 00:18:45.936 "num_base_bdevs": 2, 00:18:45.936 "num_base_bdevs_discovered": 2, 00:18:45.936 "num_base_bdevs_operational": 2, 00:18:45.936 "base_bdevs_list": [ 00:18:45.936 { 00:18:45.936 "name": "spare", 00:18:45.936 "uuid": "8f85fcec-1fcc-5630-a19c-cb9a775bfd21", 00:18:45.936 "is_configured": true, 00:18:45.936 "data_offset": 256, 00:18:45.936 "data_size": 7936 00:18:45.936 }, 00:18:45.936 { 00:18:45.936 "name": "BaseBdev2", 00:18:45.936 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:45.936 "is_configured": true, 00:18:45.936 "data_offset": 256, 00:18:45.936 "data_size": 7936 00:18:45.936 } 00:18:45.936 ] 00:18:45.936 }' 00:18:45.936 11:03:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.194 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:46.194 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.194 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:46.194 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.194 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:46.194 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.194 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.194 11:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.194 [2024-11-15 11:03:53.011192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:46.194 11:03:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.194 "name": "raid_bdev1", 00:18:46.194 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:46.194 "strip_size_kb": 0, 00:18:46.194 "state": "online", 00:18:46.194 
"raid_level": "raid1", 00:18:46.194 "superblock": true, 00:18:46.194 "num_base_bdevs": 2, 00:18:46.194 "num_base_bdevs_discovered": 1, 00:18:46.194 "num_base_bdevs_operational": 1, 00:18:46.194 "base_bdevs_list": [ 00:18:46.194 { 00:18:46.194 "name": null, 00:18:46.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.194 "is_configured": false, 00:18:46.194 "data_offset": 0, 00:18:46.194 "data_size": 7936 00:18:46.194 }, 00:18:46.194 { 00:18:46.194 "name": "BaseBdev2", 00:18:46.194 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:46.194 "is_configured": true, 00:18:46.194 "data_offset": 256, 00:18:46.194 "data_size": 7936 00:18:46.194 } 00:18:46.194 ] 00:18:46.194 }' 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.194 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.761 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:46.761 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.761 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.761 [2024-11-15 11:03:53.478427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:46.761 [2024-11-15 11:03:53.478703] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:46.761 [2024-11-15 11:03:53.478766] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:46.761 [2024-11-15 11:03:53.478809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:46.761 [2024-11-15 11:03:53.496063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:46.761 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.761 11:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:46.761 [2024-11-15 11:03:53.498005] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:47.697 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.697 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.697 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:47.697 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:47.697 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.697 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.697 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.697 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.697 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.697 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.697 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:47.697 "name": "raid_bdev1", 00:18:47.697 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:47.697 "strip_size_kb": 0, 00:18:47.697 "state": "online", 00:18:47.697 "raid_level": "raid1", 00:18:47.697 "superblock": true, 00:18:47.697 "num_base_bdevs": 2, 00:18:47.697 "num_base_bdevs_discovered": 2, 00:18:47.697 "num_base_bdevs_operational": 2, 00:18:47.697 "process": { 00:18:47.697 "type": "rebuild", 00:18:47.697 "target": "spare", 00:18:47.697 "progress": { 00:18:47.697 "blocks": 2560, 00:18:47.697 "percent": 32 00:18:47.697 } 00:18:47.697 }, 00:18:47.697 "base_bdevs_list": [ 00:18:47.697 { 00:18:47.697 "name": "spare", 00:18:47.697 "uuid": "8f85fcec-1fcc-5630-a19c-cb9a775bfd21", 00:18:47.697 "is_configured": true, 00:18:47.697 "data_offset": 256, 00:18:47.697 "data_size": 7936 00:18:47.697 }, 00:18:47.697 { 00:18:47.697 "name": "BaseBdev2", 00:18:47.697 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:47.697 "is_configured": true, 00:18:47.697 "data_offset": 256, 00:18:47.697 "data_size": 7936 00:18:47.697 } 00:18:47.697 ] 00:18:47.697 }' 00:18:47.697 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.697 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:47.697 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.953 [2024-11-15 11:03:54.641520] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.953 [2024-11-15 11:03:54.703756] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:47.953 [2024-11-15 11:03:54.703947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.953 [2024-11-15 11:03:54.704008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.953 [2024-11-15 11:03:54.704032] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.953 11:03:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.953 "name": "raid_bdev1", 00:18:47.953 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:47.953 "strip_size_kb": 0, 00:18:47.953 "state": "online", 00:18:47.953 "raid_level": "raid1", 00:18:47.953 "superblock": true, 00:18:47.953 "num_base_bdevs": 2, 00:18:47.953 "num_base_bdevs_discovered": 1, 00:18:47.953 "num_base_bdevs_operational": 1, 00:18:47.953 "base_bdevs_list": [ 00:18:47.953 { 00:18:47.953 "name": null, 00:18:47.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.953 "is_configured": false, 00:18:47.953 "data_offset": 0, 00:18:47.953 "data_size": 7936 00:18:47.953 }, 00:18:47.953 { 00:18:47.953 "name": "BaseBdev2", 00:18:47.953 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:47.953 "is_configured": true, 00:18:47.953 "data_offset": 256, 00:18:47.953 "data_size": 7936 00:18:47.953 } 00:18:47.953 ] 00:18:47.953 }' 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.953 11:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.520 11:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:48.520 11:03:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.520 11:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.520 [2024-11-15 11:03:55.215631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:48.521 [2024-11-15 11:03:55.215762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.521 [2024-11-15 11:03:55.215804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:48.521 [2024-11-15 11:03:55.215837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.521 [2024-11-15 11:03:55.216080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.521 [2024-11-15 11:03:55.216134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:48.521 [2024-11-15 11:03:55.216224] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:48.521 [2024-11-15 11:03:55.216263] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:48.521 [2024-11-15 11:03:55.216317] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:48.521 [2024-11-15 11:03:55.216387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:48.521 [2024-11-15 11:03:55.232467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:48.521 spare 00:18:48.521 11:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.521 11:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:48.521 [2024-11-15 11:03:55.234429] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:49.458 "name": "raid_bdev1", 00:18:49.458 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:49.458 "strip_size_kb": 0, 00:18:49.458 "state": "online", 00:18:49.458 "raid_level": "raid1", 00:18:49.458 "superblock": true, 00:18:49.458 "num_base_bdevs": 2, 00:18:49.458 "num_base_bdevs_discovered": 2, 00:18:49.458 "num_base_bdevs_operational": 2, 00:18:49.458 "process": { 00:18:49.458 "type": "rebuild", 00:18:49.458 "target": "spare", 00:18:49.458 "progress": { 00:18:49.458 "blocks": 2560, 00:18:49.458 "percent": 32 00:18:49.458 } 00:18:49.458 }, 00:18:49.458 "base_bdevs_list": [ 00:18:49.458 { 00:18:49.458 "name": "spare", 00:18:49.458 "uuid": "8f85fcec-1fcc-5630-a19c-cb9a775bfd21", 00:18:49.458 "is_configured": true, 00:18:49.458 "data_offset": 256, 00:18:49.458 "data_size": 7936 00:18:49.458 }, 00:18:49.458 { 00:18:49.458 "name": "BaseBdev2", 00:18:49.458 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:49.458 "is_configured": true, 00:18:49.458 "data_offset": 256, 00:18:49.458 "data_size": 7936 00:18:49.458 } 00:18:49.458 ] 00:18:49.458 }' 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.458 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.719 [2024-11-15 
11:03:56.389831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.719 [2024-11-15 11:03:56.440157] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:49.719 [2024-11-15 11:03:56.440314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.719 [2024-11-15 11:03:56.440374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.719 [2024-11-15 11:03:56.440398] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.719 11:03:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.719 "name": "raid_bdev1", 00:18:49.719 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:49.719 "strip_size_kb": 0, 00:18:49.719 "state": "online", 00:18:49.719 "raid_level": "raid1", 00:18:49.719 "superblock": true, 00:18:49.719 "num_base_bdevs": 2, 00:18:49.719 "num_base_bdevs_discovered": 1, 00:18:49.719 "num_base_bdevs_operational": 1, 00:18:49.719 "base_bdevs_list": [ 00:18:49.719 { 00:18:49.719 "name": null, 00:18:49.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.719 "is_configured": false, 00:18:49.719 "data_offset": 0, 00:18:49.719 "data_size": 7936 00:18:49.719 }, 00:18:49.719 { 00:18:49.719 "name": "BaseBdev2", 00:18:49.719 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:49.719 "is_configured": true, 00:18:49.719 "data_offset": 256, 00:18:49.719 "data_size": 7936 00:18:49.719 } 00:18:49.719 ] 00:18:49.719 }' 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.719 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.979 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:49.979 11:03:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.979 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:49.979 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:49.979 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.979 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.979 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.979 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.979 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.239 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.239 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.239 "name": "raid_bdev1", 00:18:50.239 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:50.239 "strip_size_kb": 0, 00:18:50.239 "state": "online", 00:18:50.239 "raid_level": "raid1", 00:18:50.239 "superblock": true, 00:18:50.240 "num_base_bdevs": 2, 00:18:50.240 "num_base_bdevs_discovered": 1, 00:18:50.240 "num_base_bdevs_operational": 1, 00:18:50.240 "base_bdevs_list": [ 00:18:50.240 { 00:18:50.240 "name": null, 00:18:50.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.240 "is_configured": false, 00:18:50.240 "data_offset": 0, 00:18:50.240 "data_size": 7936 00:18:50.240 }, 00:18:50.240 { 00:18:50.240 "name": "BaseBdev2", 00:18:50.240 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:50.240 "is_configured": true, 00:18:50.240 "data_offset": 256, 
00:18:50.240 "data_size": 7936 00:18:50.240 } 00:18:50.240 ] 00:18:50.240 }' 00:18:50.240 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.240 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:50.240 11:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.240 11:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:50.240 11:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:50.240 11:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.240 11:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.240 11:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.240 11:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:50.240 11:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.240 11:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.240 [2024-11-15 11:03:57.059440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:50.240 [2024-11-15 11:03:57.059546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.240 [2024-11-15 11:03:57.059588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:50.240 [2024-11-15 11:03:57.059622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.240 [2024-11-15 11:03:57.059804] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.240 [2024-11-15 11:03:57.059850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:50.240 [2024-11-15 11:03:57.059929] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:50.240 [2024-11-15 11:03:57.059966] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:50.240 [2024-11-15 11:03:57.060006] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:50.240 [2024-11-15 11:03:57.060058] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:50.240 BaseBdev1 00:18:50.240 11:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.240 11:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:51.177 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:51.177 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.177 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.177 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.177 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.177 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:51.177 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.177 11:03:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.177 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.177 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.177 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.177 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.177 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.177 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.177 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.435 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.435 "name": "raid_bdev1", 00:18:51.435 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:51.435 "strip_size_kb": 0, 00:18:51.435 "state": "online", 00:18:51.435 "raid_level": "raid1", 00:18:51.435 "superblock": true, 00:18:51.435 "num_base_bdevs": 2, 00:18:51.435 "num_base_bdevs_discovered": 1, 00:18:51.435 "num_base_bdevs_operational": 1, 00:18:51.435 "base_bdevs_list": [ 00:18:51.435 { 00:18:51.435 "name": null, 00:18:51.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.435 "is_configured": false, 00:18:51.435 "data_offset": 0, 00:18:51.435 "data_size": 7936 00:18:51.435 }, 00:18:51.435 { 00:18:51.435 "name": "BaseBdev2", 00:18:51.435 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:51.435 "is_configured": true, 00:18:51.435 "data_offset": 256, 00:18:51.435 "data_size": 7936 00:18:51.435 } 00:18:51.435 ] 00:18:51.435 }' 00:18:51.436 11:03:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.436 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.694 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:51.694 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.694 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:51.694 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:51.694 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.694 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.694 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.694 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.694 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.694 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.694 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.694 "name": "raid_bdev1", 00:18:51.694 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:51.694 "strip_size_kb": 0, 00:18:51.694 "state": "online", 00:18:51.694 "raid_level": "raid1", 00:18:51.694 "superblock": true, 00:18:51.694 "num_base_bdevs": 2, 00:18:51.694 "num_base_bdevs_discovered": 1, 00:18:51.694 "num_base_bdevs_operational": 1, 00:18:51.694 "base_bdevs_list": [ 00:18:51.694 { 00:18:51.694 "name": 
null, 00:18:51.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.694 "is_configured": false, 00:18:51.694 "data_offset": 0, 00:18:51.694 "data_size": 7936 00:18:51.694 }, 00:18:51.694 { 00:18:51.694 "name": "BaseBdev2", 00:18:51.694 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:51.694 "is_configured": true, 00:18:51.694 "data_offset": 256, 00:18:51.694 "data_size": 7936 00:18:51.694 } 00:18:51.694 ] 00:18:51.694 }' 00:18:51.694 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.952 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:51.952 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.952 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:51.952 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:51.952 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:18:51.952 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:51.953 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:51.953 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:51.953 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:51.953 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:51.953 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:51.953 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.953 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.953 [2024-11-15 11:03:58.684735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:51.953 [2024-11-15 11:03:58.684953] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:51.953 [2024-11-15 11:03:58.685016] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:51.953 request: 00:18:51.953 { 00:18:51.953 "base_bdev": "BaseBdev1", 00:18:51.953 "raid_bdev": "raid_bdev1", 00:18:51.953 "method": "bdev_raid_add_base_bdev", 00:18:51.953 "req_id": 1 00:18:51.953 } 00:18:51.953 Got JSON-RPC error response 00:18:51.953 response: 00:18:51.953 { 00:18:51.953 "code": -22, 00:18:51.953 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:51.953 } 00:18:51.953 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:51.953 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:18:51.953 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:51.953 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:51.953 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:51.953 11:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.885 "name": "raid_bdev1", 00:18:52.885 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:52.885 "strip_size_kb": 0, 
00:18:52.885 "state": "online", 00:18:52.885 "raid_level": "raid1", 00:18:52.885 "superblock": true, 00:18:52.885 "num_base_bdevs": 2, 00:18:52.885 "num_base_bdevs_discovered": 1, 00:18:52.885 "num_base_bdevs_operational": 1, 00:18:52.885 "base_bdevs_list": [ 00:18:52.885 { 00:18:52.885 "name": null, 00:18:52.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.885 "is_configured": false, 00:18:52.885 "data_offset": 0, 00:18:52.885 "data_size": 7936 00:18:52.885 }, 00:18:52.885 { 00:18:52.885 "name": "BaseBdev2", 00:18:52.885 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:52.885 "is_configured": true, 00:18:52.885 "data_offset": 256, 00:18:52.885 "data_size": 7936 00:18:52.885 } 00:18:52.885 ] 00:18:52.885 }' 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.885 11:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.451 
11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.451 "name": "raid_bdev1", 00:18:53.451 "uuid": "52151c7f-b7e6-455e-9ad3-bada312a11f2", 00:18:53.451 "strip_size_kb": 0, 00:18:53.451 "state": "online", 00:18:53.451 "raid_level": "raid1", 00:18:53.451 "superblock": true, 00:18:53.451 "num_base_bdevs": 2, 00:18:53.451 "num_base_bdevs_discovered": 1, 00:18:53.451 "num_base_bdevs_operational": 1, 00:18:53.451 "base_bdevs_list": [ 00:18:53.451 { 00:18:53.451 "name": null, 00:18:53.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.451 "is_configured": false, 00:18:53.451 "data_offset": 0, 00:18:53.451 "data_size": 7936 00:18:53.451 }, 00:18:53.451 { 00:18:53.451 "name": "BaseBdev2", 00:18:53.451 "uuid": "f779ee58-cc41-5fb8-b980-492daacf1c4a", 00:18:53.451 "is_configured": true, 00:18:53.451 "data_offset": 256, 00:18:53.451 "data_size": 7936 00:18:53.451 } 00:18:53.451 ] 00:18:53.451 }' 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89221 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89221 ']' 00:18:53.451 11:04:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89221 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89221 00:18:53.451 killing process with pid 89221 00:18:53.451 Received shutdown signal, test time was about 60.000000 seconds 00:18:53.451 00:18:53.451 Latency(us) 00:18:53.451 [2024-11-15T11:04:00.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.451 [2024-11-15T11:04:00.379Z] =================================================================================================================== 00:18:53.451 [2024-11-15T11:04:00.379Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89221' 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89221 00:18:53.451 [2024-11-15 11:04:00.340292] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:53.451 [2024-11-15 11:04:00.340455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:53.451 11:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89221 00:18:53.451 [2024-11-15 11:04:00.340506] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:18:53.451 [2024-11-15 11:04:00.340519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:54.023 [2024-11-15 11:04:00.643846] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:54.969 11:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:54.969 00:18:54.969 real 0m17.726s 00:18:54.969 user 0m23.307s 00:18:54.969 sys 0m1.702s 00:18:54.969 11:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:54.969 ************************************ 00:18:54.969 END TEST raid_rebuild_test_sb_md_interleaved 00:18:54.969 ************************************ 00:18:54.969 11:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.969 11:04:01 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:54.969 11:04:01 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:54.969 11:04:01 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89221 ']' 00:18:54.969 11:04:01 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89221 00:18:54.969 11:04:01 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:54.969 00:18:54.969 real 12m10.041s 00:18:54.969 user 16m28.460s 00:18:54.969 sys 1m53.849s 00:18:54.969 11:04:01 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:54.969 ************************************ 00:18:54.969 END TEST bdev_raid 00:18:54.969 ************************************ 00:18:54.969 11:04:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:55.228 11:04:01 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:55.228 11:04:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:18:55.228 11:04:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:55.228 11:04:01 -- common/autotest_common.sh@10 -- # set +x 00:18:55.228 
************************************ 00:18:55.228 START TEST spdkcli_raid 00:18:55.228 ************************************ 00:18:55.228 11:04:01 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:55.228 * Looking for test storage... 00:18:55.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:55.228 11:04:02 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:55.228 11:04:02 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:18:55.228 11:04:02 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:55.228 11:04:02 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:55.228 11:04:02 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.228 11:04:02 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.228 11:04:02 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.228 11:04:02 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.228 11:04:02 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.228 11:04:02 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.228 11:04:02 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.228 11:04:02 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.228 11:04:02 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.228 11:04:02 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.228 11:04:02 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.228 11:04:02 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:55.228 11:04:02 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:55.228 11:04:02 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.229 11:04:02 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:55.229 11:04:02 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:55.229 11:04:02 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:55.229 11:04:02 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.229 11:04:02 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:55.229 11:04:02 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.229 11:04:02 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:55.229 11:04:02 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:55.229 11:04:02 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.229 11:04:02 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:55.229 11:04:02 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.229 11:04:02 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.229 11:04:02 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.229 11:04:02 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:55.229 11:04:02 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.229 11:04:02 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:55.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.229 --rc genhtml_branch_coverage=1 00:18:55.229 --rc genhtml_function_coverage=1 00:18:55.229 --rc genhtml_legend=1 00:18:55.229 --rc geninfo_all_blocks=1 00:18:55.229 --rc geninfo_unexecuted_blocks=1 00:18:55.229 00:18:55.229 ' 00:18:55.229 11:04:02 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:55.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.229 --rc genhtml_branch_coverage=1 00:18:55.229 --rc genhtml_function_coverage=1 00:18:55.229 --rc genhtml_legend=1 00:18:55.229 --rc geninfo_all_blocks=1 00:18:55.229 --rc geninfo_unexecuted_blocks=1 00:18:55.229 00:18:55.229 ' 00:18:55.229 
11:04:02 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:55.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.229 --rc genhtml_branch_coverage=1 00:18:55.229 --rc genhtml_function_coverage=1 00:18:55.229 --rc genhtml_legend=1 00:18:55.229 --rc geninfo_all_blocks=1 00:18:55.229 --rc geninfo_unexecuted_blocks=1 00:18:55.229 00:18:55.229 ' 00:18:55.229 11:04:02 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:55.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.229 --rc genhtml_branch_coverage=1 00:18:55.229 --rc genhtml_function_coverage=1 00:18:55.229 --rc genhtml_legend=1 00:18:55.229 --rc geninfo_all_blocks=1 00:18:55.229 --rc geninfo_unexecuted_blocks=1 00:18:55.229 00:18:55.229 ' 00:18:55.229 11:04:02 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:55.229 11:04:02 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:55.229 11:04:02 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:55.229 11:04:02 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:55.229 11:04:02 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:55.229 11:04:02 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:55.229 11:04:02 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:55.229 11:04:02 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:55.488 11:04:02 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:55.488 11:04:02 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:55.488 11:04:02 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:55.488 11:04:02 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:55.488 11:04:02 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:55.488 11:04:02 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:55.488 11:04:02 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:55.488 11:04:02 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:55.488 11:04:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:55.488 11:04:02 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:55.488 11:04:02 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89903 00:18:55.488 11:04:02 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:55.488 11:04:02 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89903 00:18:55.488 11:04:02 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 89903 ']' 00:18:55.488 11:04:02 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.488 11:04:02 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:55.488 11:04:02 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.488 11:04:02 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:55.488 11:04:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:55.488 [2024-11-15 11:04:02.273331] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:18:55.488 [2024-11-15 11:04:02.273537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89903 ] 00:18:55.747 [2024-11-15 11:04:02.447159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:55.747 [2024-11-15 11:04:02.569147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.747 [2024-11-15 11:04:02.569190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.695 11:04:03 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:56.695 11:04:03 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:18:56.695 11:04:03 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:56.695 11:04:03 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:56.695 11:04:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:56.695 11:04:03 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:56.695 11:04:03 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:56.695 11:04:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:56.695 11:04:03 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:56.695 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:56.695 ' 00:18:58.617 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:58.617 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:58.617 11:04:05 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:58.617 11:04:05 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:58.617 11:04:05 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:58.617 11:04:05 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:58.617 11:04:05 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:58.617 11:04:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.617 11:04:05 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:58.617 ' 00:18:59.577 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:59.577 11:04:06 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:59.577 11:04:06 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:59.577 11:04:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:59.577 11:04:06 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:59.577 11:04:06 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:59.577 11:04:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:59.577 11:04:06 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:59.577 11:04:06 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:00.144 11:04:06 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:00.144 11:04:07 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:00.144 11:04:07 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:00.144 11:04:07 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:00.144 11:04:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:00.403 11:04:07 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:00.403 11:04:07 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:00.403 11:04:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:00.403 11:04:07 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:00.403 ' 00:19:01.339 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:01.339 11:04:08 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:01.339 11:04:08 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:01.339 11:04:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:01.339 11:04:08 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:01.339 11:04:08 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:01.339 11:04:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:01.339 11:04:08 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:01.339 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:01.339 ' 00:19:03.239 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:03.239 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:03.239 11:04:09 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:03.239 11:04:09 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:03.239 11:04:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:03.239 11:04:09 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89903 00:19:03.239 11:04:09 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 89903 ']' 00:19:03.239 11:04:09 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 89903 00:19:03.239 11:04:09 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:19:03.239 11:04:09 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:03.239 11:04:09 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89903 00:19:03.239 11:04:09 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:03.239 11:04:09 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:03.239 11:04:09 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89903' 00:19:03.239 killing process with pid 89903 00:19:03.239 11:04:09 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 89903 00:19:03.239 11:04:09 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 89903 00:19:05.773 11:04:12 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:05.773 11:04:12 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89903 ']' 00:19:05.773 11:04:12 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89903 00:19:05.773 11:04:12 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 89903 ']' 00:19:05.773 11:04:12 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 89903 00:19:05.773 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (89903) - No such process 00:19:05.773 11:04:12 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 89903 is not found' 00:19:05.773 Process with pid 89903 is not found 00:19:05.773 11:04:12 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:05.773 11:04:12 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:05.773 11:04:12 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:05.773 11:04:12 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:05.773 00:19:05.773 real 0m10.529s 00:19:05.773 user 0m21.835s 00:19:05.773 sys 
0m1.179s 00:19:05.773 11:04:12 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:05.773 11:04:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:05.773 ************************************ 00:19:05.773 END TEST spdkcli_raid 00:19:05.773 ************************************ 00:19:05.773 11:04:12 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:05.773 11:04:12 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:05.773 11:04:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:05.773 11:04:12 -- common/autotest_common.sh@10 -- # set +x 00:19:05.773 ************************************ 00:19:05.773 START TEST blockdev_raid5f 00:19:05.773 ************************************ 00:19:05.773 11:04:12 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:05.773 * Looking for test storage... 00:19:05.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:05.774 11:04:12 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:05.774 11:04:12 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:19:05.774 11:04:12 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:06.033 11:04:12 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:06.033 11:04:12 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:06.033 11:04:12 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:06.033 11:04:12 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:06.033 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.033 --rc genhtml_branch_coverage=1 00:19:06.033 --rc genhtml_function_coverage=1 00:19:06.033 --rc genhtml_legend=1 00:19:06.033 --rc geninfo_all_blocks=1 00:19:06.033 --rc geninfo_unexecuted_blocks=1 00:19:06.033 00:19:06.033 ' 00:19:06.033 11:04:12 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:06.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.033 --rc genhtml_branch_coverage=1 00:19:06.033 --rc genhtml_function_coverage=1 00:19:06.033 --rc genhtml_legend=1 00:19:06.033 --rc geninfo_all_blocks=1 00:19:06.033 --rc geninfo_unexecuted_blocks=1 00:19:06.033 00:19:06.033 ' 00:19:06.033 11:04:12 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:06.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.033 --rc genhtml_branch_coverage=1 00:19:06.033 --rc genhtml_function_coverage=1 00:19:06.033 --rc genhtml_legend=1 00:19:06.033 --rc geninfo_all_blocks=1 00:19:06.033 --rc geninfo_unexecuted_blocks=1 00:19:06.033 00:19:06.033 ' 00:19:06.033 11:04:12 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:06.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.033 --rc genhtml_branch_coverage=1 00:19:06.033 --rc genhtml_function_coverage=1 00:19:06.033 --rc genhtml_legend=1 00:19:06.033 --rc geninfo_all_blocks=1 00:19:06.033 --rc geninfo_unexecuted_blocks=1 00:19:06.033 00:19:06.033 ' 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90185 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:06.033 11:04:12 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90185 00:19:06.033 11:04:12 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 90185 ']' 00:19:06.033 11:04:12 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.033 11:04:12 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:06.033 11:04:12 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.033 11:04:12 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:06.033 11:04:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:06.033 [2024-11-15 11:04:12.867244] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:19:06.033 [2024-11-15 11:04:12.867480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90185 ] 00:19:06.292 [2024-11-15 11:04:13.031994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.292 [2024-11-15 11:04:13.155432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.229 11:04:14 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:07.229 11:04:14 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:19:07.229 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:07.229 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:07.229 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:07.229 11:04:14 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.229 11:04:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:07.229 Malloc0 00:19:07.491 Malloc1 00:19:07.491 Malloc2 00:19:07.491 11:04:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.491 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:07.491 11:04:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.491 11:04:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:07.491 11:04:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.491 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:07.491 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:07.491 11:04:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.491 11:04:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:07.491 11:04:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.491 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:07.491 11:04:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.491 11:04:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:07.491 11:04:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.491 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:07.491 11:04:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.491 11:04:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:07.491 11:04:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.491 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:07.491 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:19:07.491 11:04:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.491 11:04:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:07.491 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:07.491 11:04:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.491 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:07.491 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:07.492 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "de31904d-3bff-4843-b18a-dc6121a72a24"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "de31904d-3bff-4843-b18a-dc6121a72a24",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "de31904d-3bff-4843-b18a-dc6121a72a24",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "805082a5-044b-4802-8e7d-0a38ecb1ac63",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"4dfdf2ce-6a6c-43d8-a138-27e2e4ef1d71",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "85113a7c-a244-4f3c-818d-ac43b0f7e021",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:07.492 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:07.492 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:07.492 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:07.492 11:04:14 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90185 00:19:07.492 11:04:14 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 90185 ']' 00:19:07.492 11:04:14 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 90185 00:19:07.492 11:04:14 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:19:07.778 11:04:14 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:07.778 11:04:14 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90185 00:19:07.778 11:04:14 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:07.778 11:04:14 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:07.778 11:04:14 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90185' 00:19:07.778 killing process with pid 90185 00:19:07.778 11:04:14 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 90185 00:19:07.778 11:04:14 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 90185 00:19:10.314 11:04:17 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:10.314 11:04:17 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:10.314 11:04:17 
blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:10.314 11:04:17 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:10.314 11:04:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:10.314 ************************************ 00:19:10.314 START TEST bdev_hello_world 00:19:10.314 ************************************ 00:19:10.314 11:04:17 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:10.314 [2024-11-15 11:04:17.230881] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:19:10.314 [2024-11-15 11:04:17.231080] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90251 ] 00:19:10.573 [2024-11-15 11:04:17.407251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.832 [2024-11-15 11:04:17.527248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.401 [2024-11-15 11:04:18.061762] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:11.401 [2024-11-15 11:04:18.061892] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:11.401 [2024-11-15 11:04:18.061915] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:11.401 [2024-11-15 11:04:18.062452] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:11.401 [2024-11-15 11:04:18.062623] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:11.401 [2024-11-15 11:04:18.062647] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:11.401 [2024-11-15 11:04:18.062705] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:11.401 00:19:11.401 [2024-11-15 11:04:18.062726] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:12.806 00:19:12.806 real 0m2.344s 00:19:12.806 user 0m1.973s 00:19:12.806 sys 0m0.244s 00:19:12.806 11:04:19 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:12.806 11:04:19 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:12.806 ************************************ 00:19:12.806 END TEST bdev_hello_world 00:19:12.806 ************************************ 00:19:12.806 11:04:19 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:12.806 11:04:19 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:12.806 11:04:19 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:12.806 11:04:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:12.806 ************************************ 00:19:12.806 START TEST bdev_bounds 00:19:12.806 ************************************ 00:19:12.806 11:04:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:19:12.806 11:04:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90299 00:19:12.806 11:04:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:12.806 11:04:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:12.806 11:04:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90299' 00:19:12.806 Process bdevio pid: 90299 00:19:12.806 11:04:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90299 00:19:12.806 11:04:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 90299 ']' 00:19:12.806 11:04:19 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.806 11:04:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:12.806 11:04:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.806 11:04:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:12.806 11:04:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:12.806 [2024-11-15 11:04:19.634720] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:19:12.806 [2024-11-15 11:04:19.634927] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90299 ] 00:19:13.065 [2024-11-15 11:04:19.807489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:13.065 [2024-11-15 11:04:19.920807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.065 [2024-11-15 11:04:19.920945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.065 [2024-11-15 11:04:19.920984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.632 11:04:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:13.632 11:04:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:19:13.632 11:04:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:13.890 I/O targets: 00:19:13.890 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:13.890 00:19:13.890 
00:19:13.890 CUnit - A unit testing framework for C - Version 2.1-3 00:19:13.890 http://cunit.sourceforge.net/ 00:19:13.890 00:19:13.890 00:19:13.890 Suite: bdevio tests on: raid5f 00:19:13.890 Test: blockdev write read block ...passed 00:19:13.890 Test: blockdev write zeroes read block ...passed 00:19:13.890 Test: blockdev write zeroes read no split ...passed 00:19:13.890 Test: blockdev write zeroes read split ...passed 00:19:14.149 Test: blockdev write zeroes read split partial ...passed 00:19:14.149 Test: blockdev reset ...passed 00:19:14.149 Test: blockdev write read 8 blocks ...passed 00:19:14.149 Test: blockdev write read size > 128k ...passed 00:19:14.149 Test: blockdev write read invalid size ...passed 00:19:14.149 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:14.149 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:14.149 Test: blockdev write read max offset ...passed 00:19:14.149 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:14.149 Test: blockdev writev readv 8 blocks ...passed 00:19:14.149 Test: blockdev writev readv 30 x 1block ...passed 00:19:14.149 Test: blockdev writev readv block ...passed 00:19:14.149 Test: blockdev writev readv size > 128k ...passed 00:19:14.149 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:14.149 Test: blockdev comparev and writev ...passed 00:19:14.150 Test: blockdev nvme passthru rw ...passed 00:19:14.150 Test: blockdev nvme passthru vendor specific ...passed 00:19:14.150 Test: blockdev nvme admin passthru ...passed 00:19:14.150 Test: blockdev copy ...passed 00:19:14.150 00:19:14.150 Run Summary: Type Total Ran Passed Failed Inactive 00:19:14.150 suites 1 1 n/a 0 0 00:19:14.150 tests 23 23 23 0 0 00:19:14.150 asserts 130 130 130 0 n/a 00:19:14.150 00:19:14.150 Elapsed time = 0.642 seconds 00:19:14.150 0 00:19:14.150 11:04:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90299 00:19:14.150 
11:04:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 90299 ']' 00:19:14.150 11:04:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 90299 00:19:14.150 11:04:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:19:14.150 11:04:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:14.150 11:04:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90299 00:19:14.150 11:04:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:14.150 11:04:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:14.150 11:04:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90299' 00:19:14.150 killing process with pid 90299 00:19:14.150 11:04:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 90299 00:19:14.150 11:04:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 90299 00:19:15.526 11:04:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:15.526 00:19:15.526 real 0m2.748s 00:19:15.526 user 0m6.885s 00:19:15.526 sys 0m0.355s 00:19:15.526 11:04:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:15.526 11:04:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:15.526 ************************************ 00:19:15.526 END TEST bdev_bounds 00:19:15.526 ************************************ 00:19:15.526 11:04:22 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:15.526 11:04:22 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:15.526 11:04:22 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:15.526 
11:04:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:15.526 ************************************ 00:19:15.526 START TEST bdev_nbd 00:19:15.526 ************************************ 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90360 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90360 /var/tmp/spdk-nbd.sock 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 90360 ']' 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:15.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:15.526 11:04:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:15.785 [2024-11-15 11:04:22.452402] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:19:15.785 [2024-11-15 11:04:22.452529] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.785 [2024-11-15 11:04:22.628715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.043 [2024-11-15 11:04:22.742354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.610 11:04:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:16.610 11:04:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:19:16.610 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:16.610 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:16.610 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:16.610 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:16.610 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:16.610 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:16.610 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:16.610 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:16.610 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:16.610 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:16.610 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:16.610 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:16.610 11:04:23 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:16.869 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:16.869 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:16.869 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:16.869 11:04:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:16.869 11:04:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:16.870 1+0 records in 00:19:16.870 1+0 records out 00:19:16.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392969 s, 10.4 MB/s 00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:16.870 { 00:19:16.870 "nbd_device": "/dev/nbd0", 00:19:16.870 "bdev_name": "raid5f" 00:19:16.870 } 00:19:16.870 ]' 00:19:16.870 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:17.129 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:17.129 { 00:19:17.129 "nbd_device": "/dev/nbd0", 00:19:17.129 "bdev_name": "raid5f" 00:19:17.129 } 00:19:17.129 ]' 00:19:17.129 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:17.129 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:17.129 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:17.129 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:17.129 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:17.129 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:17.129 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:17.129 11:04:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:17.129 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:17.129 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:17.129 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:17.129 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:17.129 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:17.129 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:17.129 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:17.129 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:17.388 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:17.388 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:17.388 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:17.388 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:17.388 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:17.388 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:17.646 /dev/nbd0 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:17.646 11:04:24 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:17.646 1+0 records in 00:19:17.646 1+0 records out 00:19:17.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427552 s, 9.6 MB/s 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:17.646 11:04:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.906 11:04:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:17.906 11:04:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:17.906 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:17.906 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:17.906 11:04:24 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:17.906 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:17.907 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:17.907 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:17.907 { 00:19:17.907 "nbd_device": "/dev/nbd0", 00:19:17.907 "bdev_name": "raid5f" 00:19:17.907 } 00:19:17.907 ]' 00:19:17.907 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:17.907 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:17.907 { 00:19:17.907 "nbd_device": "/dev/nbd0", 00:19:17.907 "bdev_name": "raid5f" 00:19:17.907 } 00:19:17.907 ]' 00:19:18.171 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:18.171 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:18.171 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:18.171 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:18.171 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:18.171 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:18.171 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:18.171 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:18.171 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:18.171 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:18.171 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:18.171 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:18.171 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:18.171 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:18.171 256+0 records in 00:19:18.171 256+0 records out 00:19:18.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148311 s, 70.7 MB/s 00:19:18.171 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:18.171 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:18.171 256+0 records in 00:19:18.171 256+0 records out 00:19:18.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0320486 s, 32.7 MB/s 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:18.172 11:04:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:18.432 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:18.432 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:18.432 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:18.432 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:18.432 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:18.432 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:18.432 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:18.432 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:18.432 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:18.432 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:18.432 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:18.699 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:18.699 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:18.699 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:18.699 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:18.699 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:18.699 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:18.699 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:18.699 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:18.699 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:18.699 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:18.699 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:18.699 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:18.699 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:18.699 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:18.699 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:18.699 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:18.957 malloc_lvol_verify 00:19:18.957 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:18.957 e94c7cef-3fd6-4091-acba-28dd804714a5 00:19:18.957 11:04:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:19.215 883e0eea-5419-4a17-93e0-fa9e4aa14dfa 00:19:19.215 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:19.474 /dev/nbd0 00:19:19.474 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:19.474 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:19.474 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:19.474 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:19.474 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:19.474 mke2fs 1.47.0 (5-Feb-2023) 00:19:19.474 Discarding device blocks: 0/4096 done 00:19:19.474 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:19.474 00:19:19.474 Allocating group tables: 0/1 done 00:19:19.474 Writing inode tables: 0/1 done 00:19:19.474 Creating journal (1024 blocks): done 00:19:19.474 Writing superblocks and filesystem accounting information: 0/1 done 00:19:19.474 00:19:19.474 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:19.474 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:19.474 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:19.474 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:19.474 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:19.474 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:19.474 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90360 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 90360 ']' 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 90360 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90360 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:19.734 killing process with pid 90360 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90360' 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 90360 00:19:19.734 11:04:26 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@976 -- # wait 90360 00:19:21.113 11:04:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:21.113 00:19:21.113 real 0m5.583s 00:19:21.113 user 0m7.608s 00:19:21.113 sys 0m1.297s 00:19:21.113 11:04:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:21.113 11:04:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:21.113 ************************************ 00:19:21.113 END TEST bdev_nbd 00:19:21.113 ************************************ 00:19:21.113 11:04:27 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:21.113 11:04:27 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:21.113 11:04:27 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:21.113 11:04:27 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:21.113 11:04:27 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:21.113 11:04:27 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:21.113 11:04:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:21.113 ************************************ 00:19:21.113 START TEST bdev_fio 00:19:21.113 ************************************ 00:19:21.113 11:04:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:19:21.113 11:04:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:21.113 11:04:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:21.113 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:21.113 11:04:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:21.113 11:04:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:21.113 11:04:27 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:21.113 11:04:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:21.113 11:04:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:21.113 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:21.113 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:19:21.113 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:19:21.113 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:19:21.113 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:19:21.113 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:21.113 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:19:21.113 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:19:21.113 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:21.113 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:19:21.113 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:19:21.113 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:19:21.113 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:19:21.113 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:21.373 ************************************ 00:19:21.373 START TEST bdev_fio_rw_verify 00:19:21.373 ************************************ 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # break 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:21.373 11:04:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:21.632 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:21.632 fio-3.35 00:19:21.632 Starting 1 thread 00:19:33.864 00:19:33.864 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90556: Fri Nov 15 11:04:39 2024 00:19:33.864 read: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(450MiB/10001msec) 00:19:33.864 slat (nsec): min=17732, max=70071, avg=20703.61, stdev=2786.32 00:19:33.864 clat (usec): min=10, max=343, avg=138.53, stdev=50.68 00:19:33.864 lat (usec): min=30, max=369, avg=159.23, stdev=51.29 00:19:33.864 clat percentiles (usec): 00:19:33.864 | 50.000th=[ 137], 99.000th=[ 249], 99.900th=[ 285], 99.990th=[ 310], 00:19:33.864 | 99.999th=[ 330] 00:19:33.864 write: IOPS=12.1k, BW=47.1MiB/s (49.4MB/s)(465MiB/9868msec); 0 zone resets 00:19:33.864 slat (usec): min=7, max=230, avg=17.71, stdev= 3.88 00:19:33.864 clat (usec): min=56, max=1020, avg=317.20, stdev=49.58 00:19:33.864 lat (usec): min=72, max=1178, avg=334.91, stdev=51.02 00:19:33.864 clat percentiles (usec): 00:19:33.864 | 50.000th=[ 318], 99.000th=[ 445], 99.900th=[ 553], 99.990th=[ 914], 00:19:33.864 | 99.999th=[ 979] 00:19:33.864 bw ( KiB/s): min=43536, max=50696, per=99.27%, avg=47929.42, stdev=2115.42, samples=19 00:19:33.864 iops : min=10884, max=12674, avg=11982.32, stdev=528.81, samples=19 00:19:33.864 lat (usec) : 20=0.01%, 50=0.01%, 
100=13.86%, 250=39.07%, 500=47.00% 00:19:33.864 lat (usec) : 750=0.06%, 1000=0.02% 00:19:33.864 lat (msec) : 2=0.01% 00:19:33.864 cpu : usr=98.88%, sys=0.54%, ctx=33, majf=0, minf=9514 00:19:33.864 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:33.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.864 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.864 issued rwts: total=115211,119111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.864 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:33.864 00:19:33.864 Run status group 0 (all jobs): 00:19:33.864 READ: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=450MiB (472MB), run=10001-10001msec 00:19:33.864 WRITE: bw=47.1MiB/s (49.4MB/s), 47.1MiB/s-47.1MiB/s (49.4MB/s-49.4MB/s), io=465MiB (488MB), run=9868-9868msec 00:19:34.124 ----------------------------------------------------- 00:19:34.124 Suppressions used: 00:19:34.124 count bytes template 00:19:34.124 1 7 /usr/src/fio/parse.c 00:19:34.124 342 32832 /usr/src/fio/iolog.c 00:19:34.124 1 8 libtcmalloc_minimal.so 00:19:34.124 1 904 libcrypto.so 00:19:34.124 ----------------------------------------------------- 00:19:34.124 00:19:34.124 00:19:34.124 real 0m12.721s 00:19:34.124 user 0m12.660s 00:19:34.124 sys 0m0.713s 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:34.124 ************************************ 00:19:34.124 END TEST bdev_fio_rw_verify 00:19:34.124 ************************************ 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "de31904d-3bff-4843-b18a-dc6121a72a24"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"de31904d-3bff-4843-b18a-dc6121a72a24",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "de31904d-3bff-4843-b18a-dc6121a72a24",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "805082a5-044b-4802-8e7d-0a38ecb1ac63",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "4dfdf2ce-6a6c-43d8-a138-27e2e4ef1d71",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "85113a7c-a244-4f3c-818d-ac43b0f7e021",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:34.124 /home/vagrant/spdk_repo/spdk 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:19:34.124 00:19:34.124 real 0m12.998s 00:19:34.124 user 0m12.779s 00:19:34.124 sys 0m0.839s 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:34.124 11:04:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:34.124 ************************************ 00:19:34.124 END TEST bdev_fio 00:19:34.124 ************************************ 00:19:34.124 11:04:41 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:34.124 11:04:41 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:34.125 11:04:41 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:19:34.125 11:04:41 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:34.125 11:04:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:34.384 ************************************ 00:19:34.384 START TEST bdev_verify 00:19:34.384 ************************************ 00:19:34.384 11:04:41 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:34.384 [2024-11-15 11:04:41.145380] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 
00:19:34.384 [2024-11-15 11:04:41.145499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90720 ] 00:19:34.644 [2024-11-15 11:04:41.318109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:34.644 [2024-11-15 11:04:41.434796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.644 [2024-11-15 11:04:41.434829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.212 Running I/O for 5 seconds... 00:19:37.087 13774.00 IOPS, 53.80 MiB/s [2024-11-15T11:04:45.394Z] 14688.00 IOPS, 57.38 MiB/s [2024-11-15T11:04:45.964Z] 15477.00 IOPS, 60.46 MiB/s [2024-11-15T11:04:47.344Z] 15560.75 IOPS, 60.78 MiB/s [2024-11-15T11:04:47.344Z] 15117.00 IOPS, 59.05 MiB/s 00:19:40.416 Latency(us) 00:19:40.416 [2024-11-15T11:04:47.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.416 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:40.416 Verification LBA range: start 0x0 length 0x2000 00:19:40.416 raid5f : 5.03 7528.69 29.41 0.00 0.00 25476.35 133.25 24497.30 00:19:40.416 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:40.416 Verification LBA range: start 0x2000 length 0x2000 00:19:40.416 raid5f : 5.03 7550.75 29.50 0.00 0.00 25464.97 89.88 24039.41 00:19:40.416 [2024-11-15T11:04:47.344Z] =================================================================================================================== 00:19:40.416 [2024-11-15T11:04:47.344Z] Total : 15079.44 58.90 0.00 0.00 25470.65 89.88 24497.30 00:19:41.796 00:19:41.796 real 0m7.377s 00:19:41.796 user 0m13.667s 00:19:41.796 sys 0m0.256s 00:19:41.796 11:04:48 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:41.796 11:04:48 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:41.796 ************************************ 00:19:41.796 END TEST bdev_verify 00:19:41.796 ************************************ 00:19:41.796 11:04:48 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:41.796 11:04:48 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:19:41.797 11:04:48 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:41.797 11:04:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:41.797 ************************************ 00:19:41.797 START TEST bdev_verify_big_io 00:19:41.797 ************************************ 00:19:41.797 11:04:48 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:41.797 [2024-11-15 11:04:48.583025] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:19:41.797 [2024-11-15 11:04:48.583146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90817 ] 00:19:42.056 [2024-11-15 11:04:48.756289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:42.056 [2024-11-15 11:04:48.870596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.056 [2024-11-15 11:04:48.870632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.625 Running I/O for 5 seconds... 
00:19:44.502 760.00 IOPS, 47.50 MiB/s [2024-11-15T11:04:52.809Z] 888.00 IOPS, 55.50 MiB/s [2024-11-15T11:04:53.745Z] 930.67 IOPS, 58.17 MiB/s [2024-11-15T11:04:54.693Z] 951.00 IOPS, 59.44 MiB/s [2024-11-15T11:04:54.693Z] 914.00 IOPS, 57.12 MiB/s 00:19:47.765 Latency(us) 00:19:47.765 [2024-11-15T11:04:54.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.765 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:47.765 Verification LBA range: start 0x0 length 0x200 00:19:47.765 raid5f : 5.17 466.88 29.18 0.00 0.00 6815828.92 255.78 304041.25 00:19:47.765 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:47.765 Verification LBA range: start 0x200 length 0x200 00:19:47.765 raid5f : 5.18 465.66 29.10 0.00 0.00 6829105.43 166.34 304041.25 00:19:47.765 [2024-11-15T11:04:54.693Z] =================================================================================================================== 00:19:47.765 [2024-11-15T11:04:54.693Z] Total : 932.55 58.28 0.00 0.00 6822467.17 166.34 304041.25 00:19:49.671 00:19:49.671 real 0m7.574s 00:19:49.671 user 0m14.061s 00:19:49.671 sys 0m0.263s 00:19:49.671 11:04:56 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:49.671 11:04:56 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.671 ************************************ 00:19:49.671 END TEST bdev_verify_big_io 00:19:49.671 ************************************ 00:19:49.671 11:04:56 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:49.671 11:04:56 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:19:49.671 11:04:56 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:49.671 11:04:56 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:49.671 ************************************ 00:19:49.671 START TEST bdev_write_zeroes 00:19:49.671 ************************************ 00:19:49.672 11:04:56 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:49.672 [2024-11-15 11:04:56.231234] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:19:49.672 [2024-11-15 11:04:56.231389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90911 ] 00:19:49.672 [2024-11-15 11:04:56.407428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.672 [2024-11-15 11:04:56.535373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.240 Running I/O for 1 seconds... 
00:19:51.185 25479.00 IOPS, 99.53 MiB/s 00:19:51.185 Latency(us) 00:19:51.185 [2024-11-15T11:04:58.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.185 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:51.185 raid5f : 1.01 25478.44 99.53 0.00 0.00 5008.43 1681.33 6982.88 00:19:51.185 [2024-11-15T11:04:58.113Z] =================================================================================================================== 00:19:51.185 [2024-11-15T11:04:58.113Z] Total : 25478.44 99.53 0.00 0.00 5008.43 1681.33 6982.88 00:19:52.566 00:19:52.566 real 0m3.328s 00:19:52.566 user 0m2.965s 00:19:52.566 sys 0m0.234s 00:19:52.566 11:04:59 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:52.566 11:04:59 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:52.566 ************************************ 00:19:52.566 END TEST bdev_write_zeroes 00:19:52.566 ************************************ 00:19:52.825 11:04:59 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:52.825 11:04:59 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:19:52.825 11:04:59 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:52.826 11:04:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:52.826 ************************************ 00:19:52.826 START TEST bdev_json_nonenclosed 00:19:52.826 ************************************ 00:19:52.826 11:04:59 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:52.826 [2024-11-15 
11:04:59.609222] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:19:52.826 [2024-11-15 11:04:59.609406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90970 ] 00:19:53.086 [2024-11-15 11:04:59.798714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.086 [2024-11-15 11:04:59.911798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.086 [2024-11-15 11:04:59.911909] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:53.086 [2024-11-15 11:04:59.911935] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:53.086 [2024-11-15 11:04:59.911944] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:53.346 00:19:53.346 real 0m0.642s 00:19:53.346 user 0m0.422s 00:19:53.346 sys 0m0.116s 00:19:53.346 11:05:00 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:53.346 11:05:00 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:53.346 ************************************ 00:19:53.346 END TEST bdev_json_nonenclosed 00:19:53.346 ************************************ 00:19:53.346 11:05:00 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:53.346 11:05:00 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:19:53.346 11:05:00 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:53.346 11:05:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:53.346 
************************************ 00:19:53.346 START TEST bdev_json_nonarray 00:19:53.346 ************************************ 00:19:53.346 11:05:00 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:53.606 [2024-11-15 11:05:00.325740] Starting SPDK v25.01-pre git sha1 1a15c7136 / DPDK 24.03.0 initialization... 00:19:53.606 [2024-11-15 11:05:00.325870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90995 ] 00:19:53.606 [2024-11-15 11:05:00.497521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.866 [2024-11-15 11:05:00.612468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.866 [2024-11-15 11:05:00.612589] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:53.866 [2024-11-15 11:05:00.612610] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:53.866 [2024-11-15 11:05:00.612631] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:54.126 00:19:54.126 real 0m0.615s 00:19:54.126 user 0m0.383s 00:19:54.126 sys 0m0.128s 00:19:54.126 11:05:00 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:54.126 11:05:00 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:54.126 ************************************ 00:19:54.126 END TEST bdev_json_nonarray 00:19:54.126 ************************************ 00:19:54.126 11:05:00 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:19:54.126 11:05:00 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:19:54.126 11:05:00 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:19:54.126 11:05:00 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:19:54.126 11:05:00 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:19:54.126 11:05:00 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:54.126 11:05:00 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:54.126 11:05:00 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:54.126 11:05:00 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:54.126 11:05:00 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:54.126 11:05:00 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:54.126 00:19:54.126 real 0m48.411s 00:19:54.126 user 1m5.483s 00:19:54.126 sys 0m4.788s 00:19:54.126 11:05:00 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:54.126 11:05:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:54.126 
************************************ 00:19:54.126 END TEST blockdev_raid5f 00:19:54.126 ************************************ 00:19:54.126 11:05:00 -- spdk/autotest.sh@194 -- # uname -s 00:19:54.126 11:05:00 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:54.126 11:05:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:54.126 11:05:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:54.126 11:05:00 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:54.126 11:05:00 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:19:54.126 11:05:00 -- spdk/autotest.sh@256 -- # timing_exit lib 00:19:54.126 11:05:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:54.126 11:05:00 -- common/autotest_common.sh@10 -- # set +x 00:19:54.126 11:05:01 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:19:54.126 11:05:01 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:19:54.126 11:05:01 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:19:54.126 11:05:01 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:19:54.126 11:05:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:54.126 11:05:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:54.126 11:05:01 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:19:54.126 11:05:01 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:19:54.126 11:05:01 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:19:54.127 11:05:01 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:54.127 11:05:01 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:54.127 11:05:01 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:54.127 11:05:01 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:19:54.127 11:05:01 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:54.127 11:05:01 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:19:54.127 11:05:01 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:54.127 11:05:01 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:54.127 11:05:01 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:19:54.127 11:05:01 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:19:54.127 11:05:01 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:19:54.127 11:05:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:54.127 11:05:01 -- common/autotest_common.sh@10 -- # set +x 00:19:54.387 11:05:01 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:19:54.387 11:05:01 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:19:54.387 11:05:01 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:19:54.387 11:05:01 -- common/autotest_common.sh@10 -- # set +x 00:19:56.293 INFO: APP EXITING 00:19:56.294 INFO: killing all VMs 00:19:56.294 INFO: killing vhost app 00:19:56.294 INFO: EXIT DONE 00:19:56.863 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:56.863 Waiting for block devices as requested 00:19:56.863 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:57.122 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:58.058 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:58.058 Cleaning 00:19:58.058 Removing: /var/run/dpdk/spdk0/config 00:19:58.058 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:58.058 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:58.058 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:58.058 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:58.058 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:58.058 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:58.058 Removing: /dev/shm/spdk_tgt_trace.pid57002 00:19:58.058 Removing: /var/run/dpdk/spdk0 00:19:58.058 Removing: /var/run/dpdk/spdk_pid56745 00:19:58.058 Removing: /var/run/dpdk/spdk_pid57002 00:19:58.058 Removing: /var/run/dpdk/spdk_pid57231 00:19:58.058 Removing: /var/run/dpdk/spdk_pid57335 00:19:58.058 Removing: /var/run/dpdk/spdk_pid57391 00:19:58.058 Removing: /var/run/dpdk/spdk_pid57530 00:19:58.058 Removing: /var/run/dpdk/spdk_pid57548 
00:19:58.058 Removing: /var/run/dpdk/spdk_pid57758 00:19:58.058 Removing: /var/run/dpdk/spdk_pid57875 00:19:58.058 Removing: /var/run/dpdk/spdk_pid57982 00:19:58.058 Removing: /var/run/dpdk/spdk_pid58110 00:19:58.058 Removing: /var/run/dpdk/spdk_pid58218 00:19:58.058 Removing: /var/run/dpdk/spdk_pid58256 00:19:58.058 Removing: /var/run/dpdk/spdk_pid58294 00:19:58.058 Removing: /var/run/dpdk/spdk_pid58370 00:19:58.058 Removing: /var/run/dpdk/spdk_pid58493 00:19:58.058 Removing: /var/run/dpdk/spdk_pid58953 00:19:58.058 Removing: /var/run/dpdk/spdk_pid59028 00:19:58.058 Removing: /var/run/dpdk/spdk_pid59102 00:19:58.058 Removing: /var/run/dpdk/spdk_pid59128 00:19:58.058 Removing: /var/run/dpdk/spdk_pid59280 00:19:58.058 Removing: /var/run/dpdk/spdk_pid59302 00:19:58.058 Removing: /var/run/dpdk/spdk_pid59455 00:19:58.058 Removing: /var/run/dpdk/spdk_pid59476 00:19:58.058 Removing: /var/run/dpdk/spdk_pid59546 00:19:58.058 Removing: /var/run/dpdk/spdk_pid59570 00:19:58.058 Removing: /var/run/dpdk/spdk_pid59634 00:19:58.058 Removing: /var/run/dpdk/spdk_pid59652 00:19:58.058 Removing: /var/run/dpdk/spdk_pid59858 00:19:58.058 Removing: /var/run/dpdk/spdk_pid59891 00:19:58.058 Removing: /var/run/dpdk/spdk_pid59980 00:19:58.058 Removing: /var/run/dpdk/spdk_pid61342 00:19:58.058 Removing: /var/run/dpdk/spdk_pid61555 00:19:58.058 Removing: /var/run/dpdk/spdk_pid61699 00:19:58.058 Removing: /var/run/dpdk/spdk_pid62348 00:19:58.058 Removing: /var/run/dpdk/spdk_pid62554 00:19:58.058 Removing: /var/run/dpdk/spdk_pid62705 00:19:58.058 Removing: /var/run/dpdk/spdk_pid63343 00:19:58.058 Removing: /var/run/dpdk/spdk_pid63673 00:19:58.058 Removing: /var/run/dpdk/spdk_pid63818 00:19:58.058 Removing: /var/run/dpdk/spdk_pid65211 00:19:58.058 Removing: /var/run/dpdk/spdk_pid65464 00:19:58.058 Removing: /var/run/dpdk/spdk_pid65610 00:19:58.058 Removing: /var/run/dpdk/spdk_pid67001 00:19:58.058 Removing: /var/run/dpdk/spdk_pid67254 00:19:58.058 Removing: /var/run/dpdk/spdk_pid67405 
00:19:58.058 Removing: /var/run/dpdk/spdk_pid68790 00:19:58.058 Removing: /var/run/dpdk/spdk_pid69236 00:19:58.058 Removing: /var/run/dpdk/spdk_pid69382 00:19:58.058 Removing: /var/run/dpdk/spdk_pid70869 00:19:58.058 Removing: /var/run/dpdk/spdk_pid71135 00:19:58.058 Removing: /var/run/dpdk/spdk_pid71286 00:19:58.058 Removing: /var/run/dpdk/spdk_pid72781 00:19:58.058 Removing: /var/run/dpdk/spdk_pid73051 00:19:58.058 Removing: /var/run/dpdk/spdk_pid73197 00:19:58.058 Removing: /var/run/dpdk/spdk_pid74683 00:19:58.058 Removing: /var/run/dpdk/spdk_pid75176 00:19:58.318 Removing: /var/run/dpdk/spdk_pid75316 00:19:58.318 Removing: /var/run/dpdk/spdk_pid75460 00:19:58.318 Removing: /var/run/dpdk/spdk_pid75891 00:19:58.318 Removing: /var/run/dpdk/spdk_pid76631 00:19:58.318 Removing: /var/run/dpdk/spdk_pid77028 00:19:58.318 Removing: /var/run/dpdk/spdk_pid77717 00:19:58.318 Removing: /var/run/dpdk/spdk_pid78163 00:19:58.318 Removing: /var/run/dpdk/spdk_pid78922 00:19:58.318 Removing: /var/run/dpdk/spdk_pid79332 00:19:58.318 Removing: /var/run/dpdk/spdk_pid81303 00:19:58.318 Removing: /var/run/dpdk/spdk_pid81747 00:19:58.318 Removing: /var/run/dpdk/spdk_pid82189 00:19:58.318 Removing: /var/run/dpdk/spdk_pid84284 00:19:58.318 Removing: /var/run/dpdk/spdk_pid84771 00:19:58.318 Removing: /var/run/dpdk/spdk_pid85293 00:19:58.318 Removing: /var/run/dpdk/spdk_pid86365 00:19:58.318 Removing: /var/run/dpdk/spdk_pid86688 00:19:58.318 Removing: /var/run/dpdk/spdk_pid87625 00:19:58.318 Removing: /var/run/dpdk/spdk_pid87953 00:19:58.318 Removing: /var/run/dpdk/spdk_pid88898 00:19:58.318 Removing: /var/run/dpdk/spdk_pid89221 00:19:58.318 Removing: /var/run/dpdk/spdk_pid89903 00:19:58.318 Removing: /var/run/dpdk/spdk_pid90185 00:19:58.318 Removing: /var/run/dpdk/spdk_pid90251 00:19:58.318 Removing: /var/run/dpdk/spdk_pid90299 00:19:58.318 Removing: /var/run/dpdk/spdk_pid90541 00:19:58.318 Removing: /var/run/dpdk/spdk_pid90720 00:19:58.318 Removing: /var/run/dpdk/spdk_pid90817 
00:19:58.318 Removing: /var/run/dpdk/spdk_pid90911 00:19:58.318 Removing: /var/run/dpdk/spdk_pid90970 00:19:58.318 Removing: /var/run/dpdk/spdk_pid90995 00:19:58.318 Clean 00:19:58.318 11:05:05 -- common/autotest_common.sh@1451 -- # return 0 00:19:58.318 11:05:05 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:19:58.318 11:05:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:58.318 11:05:05 -- common/autotest_common.sh@10 -- # set +x 00:19:58.318 11:05:05 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:19:58.318 11:05:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:58.318 11:05:05 -- common/autotest_common.sh@10 -- # set +x 00:19:58.577 11:05:05 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:58.577 11:05:05 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:58.577 11:05:05 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:58.577 11:05:05 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:19:58.577 11:05:05 -- spdk/autotest.sh@394 -- # hostname 00:19:58.577 11:05:05 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:58.577 geninfo: WARNING: invalid characters removed from testname! 
00:20:20.524 11:05:27 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:23.812 11:05:30 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:25.734 11:05:32 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:28.318 11:05:34 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:30.225 11:05:37 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:32.758 11:05:39 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:34.664 11:05:41 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:34.664 11:05:41 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:34.664 11:05:41 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:34.664 11:05:41 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:34.664 11:05:41 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:34.664 11:05:41 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:34.664 + [[ -n 5419 ]] 00:20:34.664 + sudo kill 5419 00:20:34.673 [Pipeline] } 00:20:34.688 [Pipeline] // timeout 00:20:34.694 [Pipeline] } 00:20:34.709 [Pipeline] // stage 00:20:34.715 [Pipeline] } 00:20:34.729 [Pipeline] // catchError 00:20:34.737 [Pipeline] stage 00:20:34.739 [Pipeline] { (Stop VM) 00:20:34.752 [Pipeline] sh 00:20:35.034 + vagrant halt 00:20:37.569 ==> default: Halting domain... 00:20:45.723 [Pipeline] sh 00:20:46.004 + vagrant destroy -f 00:20:48.613 ==> default: Removing domain... 
00:20:48.884 [Pipeline] sh 00:20:49.165 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:20:49.174 [Pipeline] } 00:20:49.188 [Pipeline] // stage 00:20:49.191 [Pipeline] } 00:20:49.203 [Pipeline] // dir 00:20:49.208 [Pipeline] } 00:20:49.221 [Pipeline] // wrap 00:20:49.225 [Pipeline] } 00:20:49.239 [Pipeline] // catchError 00:20:49.262 [Pipeline] stage 00:20:49.266 [Pipeline] { (Epilogue) 00:20:49.278 [Pipeline] sh 00:20:49.562 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:55.009 [Pipeline] catchError 00:20:55.011 [Pipeline] { 00:20:55.024 [Pipeline] sh 00:20:55.309 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:55.309 Artifacts sizes are good 00:20:55.318 [Pipeline] } 00:20:55.333 [Pipeline] // catchError 00:20:55.345 [Pipeline] archiveArtifacts 00:20:55.352 Archiving artifacts 00:20:55.463 [Pipeline] cleanWs 00:20:55.476 [WS-CLEANUP] Deleting project workspace... 00:20:55.476 [WS-CLEANUP] Deferred wipeout is used... 00:20:55.482 [WS-CLEANUP] done 00:20:55.484 [Pipeline] } 00:20:55.500 [Pipeline] // stage 00:20:55.505 [Pipeline] } 00:20:55.519 [Pipeline] // node 00:20:55.525 [Pipeline] End of Pipeline 00:20:55.563 Finished: SUCCESS